Thursday 29 January 2009

Deep Zoom Tool For Microsoft Advertising


For the last month, we have been working on a Silverlight-based sales tool for Microsoft Advertising, and I'm pleased to say that it has now gone live!

The sales team will use this tool to show the advertising opportunities across the entire set of Live-based sites, including the new Home site, SkyDrive, Mail and Messenger, among others.

The tool utilises the new Deep Zoom technology to provide a really cool and intuitive interface for viewing the advertising space available across these sites. The information is presented through a hotspot system that displays the advert details when the user hovers over an advert on one of the pages.

Download a copy and have a play for yourself by visiting www.windowslivetogether.com.

I have put together a quick screencast that will show you how to get set up and explore the features the tool provides.

 

Have Fun!

Solver Foundation

Microsoft’s Solver Foundation (http://code.msdn.microsoft.com/solverfoundation), not to be confused with the Solver add-in for Excel, is a runtime for various types of mathematical problem solving. The download includes solvers for a number of specific problem types (linear programming, mixed integer programming, quadratic programming and constraint programming) and enables third-party solvers to be plugged into the runtime.

The provided solvers cover a more general area known as operational research, which, roughly speaking, is a set of mathematical approaches for finding the best (or sometimes nearly the best) solutions to complex problems.

A typical linear programming problem might go something like this. A global manufacturing company can manufacture its products at a number of different factories in different countries; these factories have different costs associated with the manufacturing process and different maximum production capacities. The goods produced need to be transported to their target markets (again, there will be different shipping costs for each factory/market combination), and there are forecasts for the amount of each product that will be sold in each market. The problem is to minimise the costs involved and therefore maximise profits.

An example of a constraint problem might look like this. You have a set of servers with a range of specifications covering things like memory, number of processors, disk space, installed web server and installed database server. You also have a set of applications that need to be deployed, and each application has a set of requirements in terms of memory usage, processor usage, disk space, web server and database server. You need to work out which applications should be installed on which machine, possibly taking into account other constraints such as application A cannot be on the same machine as application B, and applications C and D must be installed together (see the sketch below).
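
To make that concrete, here is a minimal sketch of how such an assignment problem might be expressed through Solver Foundation's .NET managed API (one of the ways of working with models, described later in this post). Everything in it is invented for illustration – two servers, 0/1 decisions, made-up memory requirements – so treat it as a sketch against the CTP rather than a definitive listing:

using System;
using Microsoft.SolverFoundation.Services;

class AssignmentSketch
{
    static void Main()
    {
        SolverContext context = SolverContext.GetContext();
        Model model = context.CreateModel();

        // 0/1 decisions: 1 means the app runs on server 1, 0 means server 2.
        Decision appA = new Decision(Domain.IntegerRange(0, 1), "AppA");
        Decision appB = new Decision(Domain.IntegerRange(0, 1), "AppB");
        Decision appC = new Decision(Domain.IntegerRange(0, 1), "AppC");
        Decision appD = new Decision(Domain.IntegerRange(0, 1), "AppD");
        model.AddDecisions(appA, appB, appC, appD);

        // A and B cannot share a machine (with only two servers,
        // exactly one of the pair must be on server 1).
        model.AddConstraint("aApartFromB", appA + appB == 1);

        // C and D must be installed together.
        model.AddConstraint("cWithD", appC == appD);

        // Server 1 has 4 GB of memory; the coefficients are each
        // app's (made-up) memory requirement in GB.
        model.AddConstraint("server1Memory", 2 * appA + 1 * appB + 1 * appC + 2 * appD <= 4);

        // This is really a feasibility problem, so any goal will do;
        // here we minimise the number of apps placed on server 1.
        model.AddGoal("balance", GoalKind.Minimize, appA + appB + appC + appD);

        Solution solution = context.Solve();
        Console.WriteLine(solution.GetReport());
    }
}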

One of the interesting samples included in the download applies this type of problem solving to search, by implementing a guided search mechanism. For example, an online retailer sells laptops that can be categorised in different ways: by price, brand, weight, operating system, memory, disk space, processor, extended warranty, rating, and so on. The problem with providing a search facility where customers can select values for any of these criteria is that many combinations won’t return anything at all, making people think (wrongly) that the choice offered by the site is limited. Guided search solves this problem by limiting the available options based upon previous selections; for example, if I selected a price under £1,000 and a weight under 2kg, I would find that my choice of brands and disk capacities had been narrowed. Notice that you can’t use a hierarchy to structure your data, as you’ve no idea which criterion the user is going to start with as the basis of their search.
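
Conceptually, guided search has to work out, for each remaining criterion, which values are still compatible with the selections made so far. Here is a naive LINQ sketch of that computation over an in-memory list (the Laptop type and its property names are invented for illustration; the Solver Foundation sample does this with a constraint solver, which copes far better as the number of criteria and combinations grows):

using System;
using System.Collections.Generic;
using System.Linq;

class Laptop
{
    public string Brand;
    public decimal Price;
    public double WeightKg;
}

class GuidedSearchSketch
{
    static void Main()
    {
        List<Laptop> laptops = new List<Laptop>
        {
            new Laptop { Brand = "Contoso",  Price = 899m,  WeightKg = 1.8 },
            new Laptop { Brand = "Fabrikam", Price = 1499m, WeightKg = 1.6 },
            new Laptop { Brand = "Contoso",  Price = 649m,  WeightKg = 2.4 }
        };

        // Which brands are still available once the user has picked
        // "price under £1,000" and "weight under 2kg"?
        var remainingBrands = laptops
            .Where(l => l.Price < 1000m && l.WeightKg < 2.0)
            .Select(l => l.Brand)
            .Distinct();

        foreach (string brand in remainingBrands)
            Console.WriteLine(brand);   // prints "Contoso" only
    }
}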

So how does this all work with Solver Foundation? The first step is to express the problem as a model, using a modelling language called OML, which looks like this:

Model[
  Decisions[
    Reals,
    SA, VZ
  ],
  Goals[
    Minimize[ 20 * SA + 15 * VZ ]
  ],
  Constraints[
    0.3 * SA + 0.4 * VZ >= 2000,
    0.4 * SA + 0.2 * VZ >= 1500,
    0.2 * SA + 0.3 * VZ >= 500,
    SA <= 9000,
    VZ <= 6000,
    SA >= 0,
    VZ >= 0
  ]
];

 

The decisions are the values the solver is free to choose, the goals are our targets, and the constraints are the limitations we have to work within. Translating your real-world scenario into this kind of model can take a bit of thought!

The model can then be solved using one of three techniques:

  1. It can be passed to a command line utility.
  2. It can be processed by an Excel add-in (not the Solver Add-in that comes with Excel) that enables you to bind cells to the model for both input and output.
  3. It can be rewritten using a .NET managed API that enables finer control over the solver and lets you bind input and output data via LINQ.

The important point to take away from this is that you don’t have to solve the problem yourself; you do, however, need to express the problem using the modelling language and possibly bind the model to external data.
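
To give a flavour of the third option, here is a minimal C# sketch of the same model expressed through the managed API (the Microsoft.SolverFoundation.Services namespace). It mirrors the OML above; treat the details as a sketch against the CTP rather than a definitive listing:

using System;
using Microsoft.SolverFoundation.Services;

class OmlModelAsCode
{
    static void Main()
    {
        SolverContext context = SolverContext.GetContext();
        Model model = context.CreateModel();

        // The two real-valued decisions; the RealNonnegative domain covers
        // the SA >= 0 and VZ >= 0 constraints from the OML.
        Decision sa = new Decision(Domain.RealNonnegative, "SA");
        Decision vz = new Decision(Domain.RealNonnegative, "VZ");
        model.AddDecisions(sa, vz);

        // Goal: Minimize[ 20 * SA + 15 * VZ ]
        model.AddGoal("cost", GoalKind.Minimize, 20 * sa + 15 * vz);

        // The remaining constraints, line for line from the OML model.
        model.AddConstraint("c1", 0.3 * sa + 0.4 * vz >= 2000);
        model.AddConstraint("c2", 0.4 * sa + 0.2 * vz >= 1500);
        model.AddConstraint("c3", 0.2 * sa + 0.3 * vz >= 500);
        model.AddConstraint("saCapacity", sa <= 9000);
        model.AddConstraint("vzCapacity", vz <= 6000);

        Solution solution = context.Solve();
        Console.WriteLine(solution.GetReport());
    }
}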

Monday 19 January 2009

SQL Server Reporting Services - Report Builder 2.0

SQL Server 2008 Reporting Services (SSRS 2008) includes the Report Designer and Report Builder 1.0 tools, which you can use to create reports ranging from the very simple to the highly sophisticated. Report Designer is hosted in Business Intelligence Development Studio (BIDS), so you get all of the fine control you need to get your reports exactly as you want them. However, BIDS is a daunting environment for non-developers, and it requires a considerable investment of time on the part of the user to become proficient in its use. Report Builder 1.0 (RB 1.0), on the other hand, is targeted at non-developers such as information workers, who need to be able to create ad-hoc reports quickly and easily. It provides easy-to-use drag-and-drop functionality in a Microsoft Office-like environment and shields users from the complexity of the underlying data source(s) by enabling them to interact with report models. A report model is an abstraction layer that sits over the top of a data source and exposes the underlying data using business-oriented language that is more meaningful to the end user.

Report Builder 2.0 (RB 2.0) is a new tool that was made available as a separate download at the end of October 2008. It offers various improvements over RB 1.0, but it does not replace it directly; you can use both tools side by side, if required. Whereas RB 1.0 is a ClickOnce application that can be installed by users from the same report server web site where they access their reports, RB 2.0 is a standalone application that must be installed separately where required. (RB 2.0 is scheduled to be released as a ClickOnce version as part of SQL Server 2008 Service Pack 1 – you’ll then be able to choose whether version 1.0 or 2.0 is installed when users click the link on the report server web site).

So what does RB 2.0 offer that RB 1.0 doesn’t? Well, firstly the Office-like theme is continued, but the look and feel are more like Office 2007; for example, there is a ‘ribbon’ in place of the older style toolbar (Figure 1). Whilst this is a matter of personal preference, I see this as an improvement.





Figure 1 – The Report Builder 2.0 user interface

Report creation wizards
RB 2.0 simplifies report creation with new wizards for creating table, matrix and chart based reports, which guide you through the process. You can select a data source (see below), add fields to the report by dragging and dropping, and arrange fields into columns, rows and values by using selection boxes. Once you’ve created a basic report, you can easily modify it by adding data regions, including lists and gauges (new in SSRS 2008), and report items such as images and text boxes.
Access to shared data sources
One of the drawbacks of RB 1.0 is that it requires a report model to be created in advance so that it can interact with it. RB 2.0 can also access report models, keeping report creation easy for information workers, but it can additionally work with other data sources directly. These can be shared data sources that already exist, or embedded sources that you create using RB 2.0 itself. The report creation wizards include connectors for a wide variety of sources (Figure 2). Once you have created your source, you can select the data that you wish to extract by using a text-based editor (or, in the case of SQL Server connections, a graphical editor).




Figure 2 – Data source properties dialog box


Editing of reports stored on the report server
RB 2.0 supports the editing of reports that are stored on the report server. This enables users to customize reports that were created and published by using Report Designer or RB 1.0. RB 1.0 is limited in this respect: reports that you create using RB 1.0 and then open and modify using Report Designer cannot be opened again in RB 1.0, which is somewhat restrictive. Now, a developer can create and publish a sophisticated report using Report Designer, and users can then access and modify the report themselves using RB 2.0 or Report Designer without any problem.

Summary
Because of these (and other) improvements, RB 2.0 will be attractive not only to information workers, but to developers too. True, it doesn’t support the full range of functionality that BIDS does, but it supports most of the major items – and it has the added benefit that you don’t need to install BIDS in order to create advanced reports.
For more information on Report Builder 2.0, visit Books Online at http://msdn.microsoft.com/en-us/library/dd207008.aspx

Friday 16 January 2009

F# Language Basics

In this, the second post on F#, I am going to walk through a simple application by way of an introduction to the language.

I am assuming that you have been able to download and install F#, which is freely available for Visual Studio 2008. If not, you can download it here: http://research.microsoft.com/fsharp

Creating a new F# project

Open Visual Studio 2008 and create a new project:

 

When creating a new F# project you have three options:

• F# Application
• F# Library
• F# Tutorial

An F# Application creates a complete stand-alone application that compiles to a standard .exe file.

An F# Library allows you to create an assembly that you can then use in your other .NET applications.

The F# Tutorial offers some pre-generated code that allows you to quickly get up to speed with the language.

For this example you need to select F# Application.  Choose an appropriate name and location and then click OK.

Once the solution is created in Visual Studio you will be presented with the source code view of Program.fs which looks like this:

#light


Not particularly exciting and maybe even slightly confusing!

#light should always appear as the first (non-comment) line of your application. The reasons for this are complex and beyond the scope of this post, so for now just accept it and let's hear no more about it!

The application that we are going to write will simply apply a calculation to every value in a list and display the results on the screen. This is deliberately simple, to highlight a few of the key concepts of the language.

In Visual Studio, enter the following code:

#light

let square x = x * x

This defines a function called square that returns the square of x. To a mathematician reading = as equality this looks odd, since x is only equal to x multiplied by x when x is 0 or 1, but as programmers I trust you're still with me!

There are two important things to notice here. Firstly, the let keyword, which is one of the three most important keywords in F#. Using let allows you to bind a value to a symbol. This should not be confused with a variable, where you typically assign a value to a symbol.

The other important thing here is the lack of any type declaration.  If we were to write this function in C# we would have something like this:

public static int square(int x)

{

    return x * x;

}

Here we have to explicitly specify the type of x as well as the return type. In F# the compiler infers this information automatically. This is known as type inference.

Now we need to declare a list of integers from 1 to 15 (or higher if you like) so enter the following code:

let numbers = [1 .. 15]

Lists are the backbone of functional programming and in F# all lists are immutable linked lists.

Now we need to apply the function square to each value in the list numbers and we can do it without a single for loop by using the following code:

let squares = List.map square numbers

Now this is a little more complex, so let's break it down as follows.

We are declaring a symbol called squares whose value is the result of evaluating the expression: List.map square numbers

List.map generates a new collection whose elements are the result of applying a given function to each element of a collection.  It takes a function and a collection as parameters.

In essence what we are doing here is passing a function as a parameter to another function. Functions that can be passed around as values like this are known as first-class functions, and functions such as List.map that take other functions as parameters are known as higher-order functions; both are key concepts in functional programming.

At this point your code should look like this:

#light

let square x = x * x

let numbers = [1 .. 15]

let squares = List.map square numbers

If you hover your mouse over squares, Visual Studio will tell you that this is a list of integers.  Type inference at work again!

To display the contents of squares we can use the following code:

printfn "Numbers squared = %A" squares

printfn is a simple (and type-safe) way to print text to the console. Where F# differs from C# is in its use of printf-style format specifiers. In this example we are using %A, which is a general format specifier that prints any type.

Other format specifiers are:

%f – Prints floats

%b – Prints bools

%d – Prints integers

%s – Prints strings

Finally complete the application by adding the following code:

open System

Console.ReadLine()

open System tells the F# compiler to load the System namespace and bring it into scope.  This is the same as writing using System; in C#. 

This code shows how easy it is to use .NET libraries within your F# code and you really can call any library you like.

Console.ReadLine() just pauses the application so that you can see the output before the window closes.

Your finished application should look like this:

#light

let square x = x * x

let numbers = [1 .. 15]

let squares = List.map square numbers

printfn "Numbers squared = %A" squares

open System

Console.ReadLine()

Run the application and you should see the following output:
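
Numbers squared = [1; 4; 9; 16; 25; 36; 49; 64; 81; 100; 121; 144; 169; 196; 225]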

Hopefully this has given you a little insight into the potential of this language.  Next time I will get a little more in depth as we take on something a little more challenging!

 

Introduction to Microsoft F# - Part 1

I have recently been tasked with putting together some internal training materials on Microsoft’s new F# programming language. F# is a functional programming language, and in this post I aim to give you an overview of its many benefits and hopefully inspire you to add it to your development toolset. In later posts we will start delving into the language and get our hands dirty with some code.

As many of you may know, F# has been knocking around for some time now, having originally started out at Microsoft Research in Cambridge as a pet project of Don Syme. So popular has the language become that it is about to break into the mainstream when it is officially released as part of Visual Studio 2010.

So what is functional programming?

Functional programming is a style of programming that treats computation as the evaluation of mathematical functions. Typically a functional language does not contain state or mutable data, and the emphasis is very much on the application of functions. Imperative and OO-based languages such as C#, however, allow changes in state, and data types in these languages are very often mutable.

Functional programming can trace its roots back to the early 1950s with languages such as LISP; however, functional languages have never really gained momentum outside of the academic and scientific domains.

If you are a .NET developer, functional programming is not entirely new to you. Features now commonplace in .NET, such as generics and LINQ, all have their roots in functional programming.

What does F# bring to the party?

F# brings typed functional programming to the .NET Framework. It is very succinct and expressive and allows for a new style of application development. As a (soon to be) fully CLR / CLS compliant language an F# application or library has full access to the entire range of .NET Framework APIs and is fully interoperable with other .NET languages such as C# and VB.NET.

Pure functional programming is often the best approach for solving complex computational problems; however, traditional functional languages such as Haskell, Lisp and OCaml suffer from a lack of interoperability. F# is a natural extension of these languages in that it combines the three main programming paradigms (imperative, object-oriented, and functional). Doing so gives us a general-purpose .NET language that can be used in whatever style takes your fancy! In fact, you can use all three approaches within the same code.

One of the biggest benefits I have found in using F# is that not only is it a strongly typed language, it also offers excellent type inference. This means that as a developer you no longer need to explicitly specify a type; the only exception is when the type is ambiguous. I will look at this in more depth over the next few posts.

Why should I use F# ?

F# is the only .NET language to offer a combination of scripted, functional, object-oriented and imperative programming. It allows you to solve highly complex computational tasks with relative ease, and it interoperates fully with all the other .NET languages. So if you're ever faced with a problem that you're struggling to solve in C#, give F# a go and see how easy it can be.

Where can I get F# ?

F# is currently available as a CTP release for Visual Studio 2008. For more information on F#, visit http://research.microsoft.com/fsharp

Monday 12 January 2009

The Windows 7 Taskbar Part 1

I've recently been playing with the beta build of Windows 7 that Microsoft have released, and I am very impressed. The Windows 7 team have come up with a brilliant new taskbar that's user friendly, cleaner and more intuitive, and that makes launching your favourite applications easier and quicker!

The taskbar in Windows 7 is customisable: you can choose your favourite apps and drag them onto the taskbar.

By dragging your favourite apps onto the taskbar, you are enabling a whole lot of great functionality! Read on and see what you can do…

Switching and Launching Applications

Ask yourselves these questions: Are you ever irritated by opening the wrong document from your taskbar? Are you ever confused when switching between and launching programs? I can safely say you will never have these problems again!

So you click on the new app you dragged onto your taskbar, and up pops the app… well, obviously! Let's use Microsoft Word as an example. You decide to open two different saved Word documents, and in the usual way you open them via the File and Open menus. You now want to switch easily between these two documents, and you can do this in a number of ways. You can hover over your app icon on the taskbar, and two thumbnails will appear (as in Vista), but this time the thumbnails are grouped. The thumbnails are larger than the ones shown in Vista, so you can easily read the title of each document and click on the one you wish to switch to.

You can also hover over each individual thumbnail, without actually clicking on either of them, and the thumbnail you are hovering over will appear in the foreground of your screen, allowing you to “peek” at the document. The background turns to glass so your focus is completely on the document you are hovering over. This allows you to see quite clearly which document is which; once you decide which one you want to switch to, simply click on that thumbnail.

If you just wanted to “peek” at the documents and not switch to either of them, move your mouse away from the thumbnails and you will be returned to the original state of your desktop.

You can have as many apps running as you like, and you will still be able to hover over the app icons on the taskbar, view all windows that are currently open, and click on them if necessary.

Grouped thumbnails are a brilliant idea for the majority of users; however, for those of you who on average have more than 15 windows open in one application (that’s around 5% of users), I don’t think grouped thumbnails are for you. The good news is that you can turn off grouped thumbnails if you wish.

Jump Lists

After you have customised your taskbar to include all your favourite app icons, you can do some really interesting and helpful things.

If you right-click on an app's icon, you get a kind of mini Start menu, presenting a list of the things you do most often in that app. Let's use Internet Explorer as an example now. Simply right-click the IE icon, and you are presented with a jump list of recently visited websites. You can click and jump to a previous website within seconds.

A jump list may consist of recent history, tasks, frequently used folders and so on, depending on which app you right-click. Imagine how quickly you can navigate to the required destination using the taskbar! Fantastic!

I do have a slight concern with jump lists and legacy software. For example, if a company uses an older version of a particular piece of software and does not wish to upgrade to a newer version, does it still get these jump list features, and can you still drag the legacy software's icon onto the taskbar? I am interested to see how the Windows 7 team handle this issue, as I am sure many companies do not use the latest and greatest software out there.


I will be posting more information about the Windows 7 taskbar in the next few days.


Adventures in Filestream Data

Regular readers of this blog may have seen my previous article on using SQL Server 2008's new spatial data support to create the Beanie Tracker Web application. The application tracks the travels of a small white bear named Beanie, using Virtual Earth to show photos of him at various locations around the world. The idea for creating the application came from a whitepaper and a hands-on lab I created for the SQL Server marketing team at Microsoft.

As part of the same project for the SQL Server marketing team, I wrote a whitepaper on managing unstructured data in SQL Server 2008, which includes a short description of another new feature: FILESTREAM data. The idea behind FILESTREAM is to help solve a common problem in database solutions that need to store binary large object (BLOB) data such as images or multimedia objects. In the past, you've basically had two choices for storing BLOBs: you can store them in the database in varbinary columns, or you can store them on the file system and include a file path reference in the database so that applications can find them. The debate about which of these approaches is the best one is well documented, and usually boils down to a tradeoff between the performance advantages and data access flexibility of storing the data on the file system versus the manageability benefits of storing the data in the database.

FILESTREAM support in SQL Server 2008 is designed to offer a best-of-both-worlds approach, in which the data is stored physically on the file system but managed as if it were stored in the database. Additionally, a dual programming model enables the data to be accessed through Transact-SQL statements or through the Win32 file streaming API. FILESTREAM is supported in all editions of SQL Server 2008 (other than Compact Edition), including SQL Server Express; and it's worth noting that because the data is physically stored in the file system, your binary data does not count towards the 4 GB size limit of a SQL Server Express database.

To try all this for myself, I decided to modify the Beanie Tracker application to store the photos of Beanie as FILESTREAM data. This proved remarkably simple.

The first step was to configure FILESTREAM support in the SQL Server instance. To do this, you view the properties of the SQL Server instance in SQL Server Configuration Manager and set the appropriate values on the FILESTREAM tab as shown here:

[Screenshot: FILESTREAM settings in SQL Server Configuration Manager]

Next you need to use sp_configure to enable FILESTREAM as shown here:

EXEC sp_configure filestream_access_level, 2
RECONFIGURE

Now that FILESTREAM is enabled, you can create a database that can store FILESTREAM data:

CREATE DATABASE BeanieTrackerFS
ON PRIMARY(NAME=BeanieData, FILENAME='c:\data\BeanieData.mdf'),
FILEGROUP BeanieFSFG CONTAINS FILESTREAM(NAME=BeanieFS, FILENAME='c:\data\beaniefs')
LOG ON(NAME=BeanieLog, FILENAME='c:\data\BeanieLog.ldf')

Note the inclusion of a filegroup that contains FILESTREAM data. The FILENAME parameter for this filegroup is the folder on the file system where the FILESTREAM data will be stored. If you take a look at this location in Windows Explorer, you'll see the files and folders that SQL Server uses to store the data as shown here (you need to give yourself permission to view the folder):

[Screenshot: the FILESTREAM data folder in Windows Explorer]

Now you can create a table with a varbinary(max) column for your FILESTREAM data. By specifying the FILESTREAM attribute for the column, you ensure that the data is stored in the FILESTREAM filegroup (and therefore on the file system) rather than in the database data pages:

CREATE TABLE Photos
([PhotoID] uniqueidentifier ROWGUIDCOL NOT NULL PRIMARY KEY,
[Description] nvarchar(200),
[Photo] varbinary(max) FILESTREAM NULL,
[Location] geography)

Note that you also need to include a ROWGUIDCOL column when storing FILESTREAM data. In the previous version of the Beanie Tracker database the PhotoID column was an integer; I changed it to a uniqueidentifier column in this version.

After the table has been created, you can treat the FILESTREAM column just as if it were an ordinary varbinary column. For example, here's a Transact-SQL statement to insert a photo from an existing .jpg file:

INSERT INTO Photos
([PhotoID], [Description], Photo, Location)
VALUES
(Newid(),'Beanie in Paris',
(SELECT * FROM OPENROWSET(BULK N'C:\BeanieTracker\8.JPG', SINGLE_BLOB) As [Photo]),
geography::STPointFromText('POINT (2.328 48.8661)', 4326))

To access the data from a client application, you can use the Win32 streaming API to read the data from the file system, or you can use Transact-SQL. The client application does not need to do anything special for FILESTREAM data. Here's the Visual Basic .NET code in the Beanie Tracker Web application to retrieve the photo for a specific location:

Dim PhotoID As String = Request.QueryString("PhotoID")
'Connect to the database and bring back the image contents for the specified picture
Using myConnection As New SqlConnection(ConfigurationManager.ConnectionStrings("BeanieTracker").ConnectionString)
    Const SQL As String = "SELECT [Photo] FROM [Photos] WHERE [PhotoID] = @PhotoID"
    Dim myCommand As New SqlCommand(SQL, myConnection)
    myCommand.Parameters.AddWithValue("@PhotoID", PhotoID)
    myConnection.Open()
    Dim myReader As SqlDataReader = myCommand.ExecuteReader()
    If myReader.Read() Then
        Response.ContentType = "image/jpeg"
        Response.BinaryWrite(myReader("Photo"))
    End If
    myReader.Close()
    myConnection.Close()
End Using
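
For comparison, here is a sketch of the Win32 streaming route in C#, using the SqlFileStream class (System.Data.SqlTypes, .NET 3.5 SP1). The PathName() function and GET_FILESTREAM_TRANSACTED_CONTEXT() must be called inside a transaction, hence the TransactionScope; connectionString and photoId stand in for your own values, and the table and column names match the example above:

using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;
using System.Transactions;

class PhotoReader
{
    static byte[] ReadPhoto(string connectionString, System.Guid photoId)
    {
        using (TransactionScope scope = new TransactionScope())
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT Photo.PathName(), GET_FILESTREAM_TRANSACTED_CONTEXT() " +
                "FROM Photos WHERE PhotoID = @PhotoID", conn);
            cmd.Parameters.AddWithValue("@PhotoID", photoId);

            string path = null;
            byte[] txContext = null;
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                if (reader.Read())
                {
                    path = reader.GetString(0);      // logical path to the BLOB file
                    txContext = (byte[])reader[1];   // token tying the handle to the transaction
                }
            }

            byte[] photo = null;
            if (path != null)
            {
                // Opens a Win32 handle on the FILESTREAM file and reads the bytes
                // directly from NTFS, rather than through the usual TDS result stream.
                using (SqlFileStream sfs = new SqlFileStream(path, txContext, FileAccess.Read))
                {
                    photo = new byte[sfs.Length];
                    sfs.Read(photo, 0, photo.Length);
                }
            }

            scope.Complete();
            return photo;
        }
    }
}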

The page containing this code is requested by the tooltip in the Virtual Earth control when the user hovers the mouse over a pin on the map, so that the appropriate image of Beanie is displayed.

So all in all, it's pretty easy to use FILESTREAM data in a SQL Server database. In this example, I get the advantages of storing my images on the file system, freeing up valuable space in a SQL Server Express database and enabling access via Transact-SQL or the Win32 streaming API. However, I also get the manageability advantages of storing the data in the database, so it's included when I perform a backup, for example.

Thursday 8 January 2009

Introduction to SQL Data Services Part 1

Welcome to the first of a three part series of posts that aim to explore the new SQL Data Services (SDS) platform from Microsoft.

In this first post, we will explore the Azure Services Platform and where SDS fits into this exciting range of new cloud-based services. We will then explore the benefits of utilising SDS within your applications, before looking at the data model the service adopts and the protocols used to access your data. We will finish by exploring the prerequisites and the developer SDK, to set the scene for subsequent posts. Let's get started!

The end of October saw developers from all over the world gather in Los Angeles for what was Microsoft’s biggest event of 2008 – The Professional Developer Conference.

It was here that Microsoft unveiled their newest wave of technologies to the developers in attendance and the millions watching worldwide. One of the biggest announcements of the whole event came on the very first day with the official unveiling of Microsoft’s Cloud Computing venture – The Azure Services platform.

Azure is a brand new platform that provides scalable hosting for your Web-based applications. At the heart of the platform is Windows Azure – the cloud-based operating system that serves as the development, service hosting and service management environment for the Azure services.



The Azure services are subsets of functionality that relate to the larger, on-premise versions of the particular platforms. These subsets are designed to offer the same capabilities, but to serve them as services to be consumed by your applications. Currently, the platform offers functionality in the form of Live Services, .NET Services, SQL Services, and SharePoint and Dynamics CRM Services.


The SQL Services platform essentially extends SQL Server to the cloud, with tweaks to make it scale over thousands of servers and allow you to store, retrieve, and manipulate any amount of data, from a few kilobytes to several terabytes.

The current beta deployment stands at around 1,200 servers geographically spread over five data centres!

But why should you use, or even care about, this exciting new service? Your on-premise database servers have served you well for years, so how will this new model be of any benefit to you as a developer? Here is a breakdown of three key areas that help demonstrate how cool this new platform really is.

Flexibility and Scale

A big push for this platform is the idea of flexibility and scale. When building your applications, infrastructure limits are no longer a problem as the data store will scale dynamically to any size.

The API is accessible through two standards-based interfaces – Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). Using these interfaces enables interaction with the service to be language and platform independent, and you can access your data from any place, at any time, from any device that supports HTTP.

From a business perspective, the "pay as you grow" service model helps to keep your start-up costs low and ensures that you only pay for the storage you use, which results in a lower total cost of ownership (TCO).

Reliability and Security

One of the big concerns I had about the entire platform (not just SDS) was the security and reliability issues. Trusting Microsoft with large amounts of your corporate data is a big step for any company large or small.

To address this issue, Microsoft have taken the step of building these new services on top of the SQL Server and Windows Server 2008 technology stack, to give you the same tried and tested performance you've come to expect from these products. This, coupled with published service level agreements, helps ensure enterprise-class performance and reliability.

Developer Agility

With the SOAP and REST protocols already implemented, and other protocols (such as JavaScript Object Notation (JSON) and the Atom Publishing Protocol (AtomPub)) on the way, it is very much a free-for-all development experience, allowing you to use whatever tools and platform you feel comfortable with.

Where the platform really shines is in its flexible data model, which doesn't require any schemas. There is no need to create complex table, column and relationship structures. SDS supports the familiar String, Decimal, Boolean, DateTime, and Binary property data types, and you can also store virtually any type of content as a binary large object (BLOB).

The Ace Model

SDS offers a simple (and really easy to understand) data model that gives you complete control over the way your data is expressed and related. There is no forced relational schema; instead, data is organised using the ACE model – Authorities, Containers and Entities.

Check out the following table, which describes how the ACE elements relate to each other and how they can be thought of if you are familiar with SQL Server.

| Business Logic Layer | Definition | Purpose | SQL Server Analogy |
| --- | --- | --- | --- |
| Authority | Set of containers | Groups containers for accounting, security, and co-location | A SQL Server instance |
| Container | Set of entities | Groups entities for content and queries | An individual database |
| Entity | Scalar property bag | A unit of storage | An individual record |



Just to put this into a bit of context: if we were building a system for a food supermarket, we could have an authority for each store, a container for each product type (Fruit, Vegetables, Bakery and so on), and an entity for each individual product record, with each entity having a Name, a Price and the current items in stock, for example.

Data Access

One of the major design goals of SDS was to enable communication from any programming environment. To enable this, SDS currently uses two protocols for communicating with the service: SOAP and REST.

The SOAP protocol is familiar to many developers who consume Web services, is language and platform independent, and is available in any development environment that provides access to a SOAP stack. SDS is also very well supported by the Microsoft Visual Studio® tools. Developers typically use SOAP when developing for a Microsoft-based environment, especially in enterprise applications where security and interoperability are important.

REST is quite different to SOAP. It is a lightweight HTTP-based protocol that uses URIs to facilitate the exchange of data. This means you can use REST from any environment that has access to an HTTP stack, and that includes the web browser.
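
To give you a feel for the REST style, here is a minimal C# sketch that fetches a single entity with an HTTP GET. The host name, URI shape and authentication details are illustrative only (they have varied between CTPs, so check the SDS documentation), and the authority, container and entity names are invented:

using System;
using System.IO;
using System.Net;

class SdsRestGet
{
    static void Main()
    {
        // Hypothetical authority "mystore", container "fruit" and entity "apple".
        string uri = "https://mystore.data.database.windows.net/v1/fruit/apple";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "GET";
        request.Credentials = new NetworkCredential("solutionName", "password");

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // The entity comes back as a property bag (XML here; JSON and
            // AtomPub are among the formats on the way).
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}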

Prerequisites and SDK

To get started with development, you will need a CTP account with associated credentials to access your online solution. At the time of writing, there are two methods of obtaining an account. If you live in the US, you can visit the SQL Data Services Developer Center and click “New Customer Sign up” on the left-hand side, and you will be taken through the process of signing up. Note that while this is free, you will need a credit card for identification and validation purposes. From what I have read elsewhere, this is done by charging $1 to your card.

If, however, you are like me and based outside the US, you can sign up for the CTP of the Azure Services as a whole (which includes SDS) by visiting the Azure Services Platform site. This is also free and requires no credit card.

Once you have your account, you are all set to start your SDS adventures. To help you on your way, there is an SDK that provides a few tools and documentation links that are extremely useful. You can download a copy of the SDK from here.

The main feature of the SDK is the SDSExplorer tool. This tool provides a GUI for interacting with the data stored within your SDS account and executes operations using the REST protocol.

In the next post in the series, we will explore some code that will allow you to execute SDS commands programmatically using REST.

Wednesday 7 January 2009

SharePoint – Redirect to Secure Channel on Login

A couple of weeks ago, one of our IT guys was looking into hosting blog sites on Windows SharePoint Services 3.0 (using the CKS:EBE blogging engine – well worth a look). We started running through his requirements.

"We need read-only access for anonymous users." Fine.

"We need Forms auth accounts for contributors." No problem.

"We need to redirect users to a secure channel when they log in." OK, this is more challenging.

We can use Web application zones to support multiple access mechanisms, but how do we redirect users between zones when they sign in? This takes a little more trickery.

Let's start with a brief recap on SharePoint zones. Generally, there's a one-to-one mapping between a SharePoint Web application and an IIS Web site. The IIS Web site takes care of user authentication (for Windows auth at least), SSL certificate mapping, and so on. The SharePoint Web application takes care of authorization and serves up content. You can think of the IIS Web site as providing a specific route to the SharePoint Web application, through a specific URL, protocol (http vs. https), and authentication mechanism.

So, what happens if we want to provide more than one route to our SharePoint Web application? In the case of our blog site, we need to provide at least two routes – an http channel for anonymous access, and an https channel for registered users. In other words, we need to map our SharePoint Web application to at least two IIS Web sites. This is where zones come in – when you extend a SharePoint Web application to a new zone, you are effectively mapping it to an additional IIS Web site (with a corresponding configuration file).

In our case, we actually need three zones:


  • A default zone that uses Windows authentication to provide search support (more on this later).
  • An extranet zone to provide secure, forms-authenticated access to registered users.
  • An internet zone to provide read-only access to anonymous users.




So – why the redundant default zone? Because the SharePoint 2007 search service can't use forms authentication for search crawls. If you want to use forms authentication with SharePoint, you must still configure at least one zone for Windows authentication. Furthermore, the Windows authentication zone must be at the top of the list – if you configure the default zone for forms authentication, the search service will try this first and the indexing process will fail. In short, always configure your default zone for Windows authentication.

Now, let's get down to details. I don't want to reinvent the wheel here, so I'm going to assume you can create and configure the default zone and the extranet zone. If you're looking for walkthroughs on zones and forms authentication, the SharePoint Products and Technologies Team Blog has a good post here. In this post, I want to focus on how to redirect users between zones when they sign in.

So, we've got a default zone configured for Windows authentication. Our search service uses this zone to index our blog sites. We've also got an extranet zone configured for forms authentication. Our contributors use this zone to sign in over a secure channel and post content. Next, we need to create an internet zone that:


  1. Allows anonymous users to read blog entries.
  2. Redirects users to the extranet zone when they click Sign In.

Number 1 is straightforward. Number 2 gave me a headache. I looked at various ways of redirecting users on sign in. I tried customising the Welcome.ascx control in the Control Templates folder, but I wouldn't recommend opening that particular can of worms. As it turns out, all it requires is a minor change to the configuration file. First, here's a high level overview of how we configured our internet zone:

  1. Extend the Web application to a new zone, with a host header of http://www.blogs.contoso.com/.
  2. Configure the policy settings for the new zone to deny write permissions to unauthenticated users.
  3. In IIS, enable anonymous access and Windows authentication (a temporary measure) for the Web site that corresponds to the internet zone.
  4. Browse to the site collection using the internet zone URL, sign in, and enable anonymous access at the site collection level.
  5. Go back to IIS, and clear all authentication types except anonymous access for the site that corresponds to the internet zone.

Now for the special move. Open the Web.config file for the internet zone Web site and locate the authentication element. It should resemble this:

<authentication mode="Windows" />

Change the authentication element to use forms authentication – but don’t add the connection strings, role manager details, membership provider details and so on. Set the login URL for the authentication element to point to the login page for the extranet zone:

<authentication mode="Forms">
  <forms loginUrl="https://extranet.blogs.contoso.com/_layouts/login.aspx" />
</authentication>

This provides the behaviour we're looking for. If you access the site anonymously and click Sign In, you’re redirected to the secure URL and authenticated against the membership provider. You’re then redirected back to the same relative URL, but in the secure zone – e.g. if you were browsing http://www.blogs.contoso.com/jasonl, you click sign in and authenticate yourself, and you’re redirected back to https://extranet.blogs.contoso.com/jasonl. Same page, different access mechanism.

How does it work? When the Sign In control redirects you to the login page, it includes the site-relative URL of the page you were visiting as a querystring. When you provide valid credentials, the login page will attempt to send you back to the page you came from – however, because it only has the site-relative URL, it has no idea which zone you came from. As a result, you're redirected to the correct page, but via the secure extranet zone rather than the original internet zone. The end result is you can only access the site anonymously over http, and you can only access the site as an authenticated user over https – perfect.