Wednesday, September 01, 2010

Windows Azure and Cloud Computing Posts for 9/1/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available as free HTTP downloads from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Phil Ruppert attempted to explain Why SOAP Makes For Good Cloud Storage in this 9/1/2010 post:

Cloud hosting companies have several different protocols for sending and receiving information. SOAP and REST are two popular protocols. SOAP works well for cloud storage for a number of reasons.

First, what is SOAP?

SOAP started out as an acronym for Simple Object Access Protocol. The acronym was dropped, however, as SOAP began to expand its uses. Now it is simply referred to as SOAP without any underlying meaning to the letters. That doesn’t reduce its ability to provide cloud storage services, however.

SOAP is XML-based, meaning XML is the language of the protocol. This makes it easy to use, versatile, flexible, and scalable. Because XML has become a widely used language for many purposes, including data feed distribution, it is a powerful format for delivering data files to and from remote locations securely.

SOAP can use both SMTP and HTTP for data transfer. While the SOAP-over-HTTP combination has gained wider acceptance, the fact that SOAP can also travel over SMTP gives it an edge. SOAP also works over HTTPS, which makes it desirable where stored data must be transferred securely.

SOAP can also tunnel easily through intermediaries such as firewalls and proxies.

SOAP is also compatible with Java.

In short, SOAP is versatile and flexible, making it a powerful protocol for the data transfer and data storage components of the cloud.

This appears to me to be a superficial argument. Most Azure storage users will undoubtedly stick with the default REST API format or use the .NET wrapper provided by the StorageClient library.
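For reference, here is a minimal sketch of that wrapper in use; it assumes the Microsoft.WindowsAzure.StorageClient library from the Windows Azure SDK and uses placeholder account credentials:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class BlobUploadSample
{
    static void Main()
    {
        // Placeholder credentials -- substitute your own storage account name and key.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        // The StorageClient classes wrap the underlying REST calls.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("samples");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference("hello.txt");
        blob.UploadText("Hello from the StorageClient wrapper over the REST API.");
    }
}

Every call above becomes an authenticated REST request under the covers, which is why few Azure storage developers ever need to craft SOAP or raw REST payloads themselves.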


Eric Golpe advertised NEW Cloud Computing & Windows Azure Learning “Snacks” on 9/1/2010:

“What is Cloud Computing?” “How do I get started on the Azure platform?” “How can organizations benefit by using Dynamics CRM Online?” These are some of the questions answered by Microsoft’s new series of Cloud/Azure Learning Snacks.  Time-strapped?  You can learn something new in less than five minutes!  Try a “snack” today or click here for more information: MS Learning - Windows Azure Training Catalog.


See the Ellen Messmer reported Trend Micro brings encryption to the cloud in an 8/31/2010 NetworkWorld story posted by the San Francisco Chronicle’s SF Gate blog item in the Other Cloud Computing Platforms and Services section below.


Phil Ruppert explained Three Ways To Store Data On Windows Azure in an 8/30/2010 post:

Windows Azure is Microsoft’s cloud hosting platform. It’s flexible, versatile and scalable. One of the benefits of using Windows Azure is the ability to store data in three different ways.

  • Blobs – The most basic way to store data on Windows Azure is with something called blobs. Blobs can be large or small. They consist of binary data and can be grouped into containers. You can also use blobs as the backing store for Windows Azure (X)Drives.
  • Tables – Beyond blobs, you can store data in tables. These are not like tabulated data in HTML. They are also not SQL-based. Rather, you access table data through ADO.NET Data Services. By storing data this way you can spread data across many machines. You can effectively store billions of entities holding terabytes of information in total.
  • Queues – A queue allows you to store data so that Web roles can communicate easily with Worker roles. For example, a user on your website may submit data through a form to request a computing task. The Web role writes the request into a queue, and a Worker role listening on that queue then carries out the requested task.

With Windows Azure, your data is replicated three times when it is stored, so that it can’t be lost. You can also keep a backup copy of your data in another data center in another part of the world so that if something is lost it can easily be retrieved.
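To illustrate the queue scenario described above, here is a minimal sketch that assumes the StorageClient library from the Windows Azure SDK; the queue name and the development-storage connection string are placeholders. A Web role would call EnqueueRequest and a Worker role would call ProcessNextRequest in its Run() loop:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class TaskQueue
{
    // Development storage is a placeholder; substitute your account's connection string.
    static CloudQueue GetQueue()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        CloudQueueClient client = account.CreateCloudQueueClient();
        CloudQueue queue = client.GetQueueReference("tasks");
        queue.CreateIfNotExist();
        return queue;
    }

    // Called from the Web role when a user submits a request.
    public static void EnqueueRequest(string payload)
    {
        GetQueue().AddMessage(new CloudQueueMessage(payload));
    }

    // Called from the Worker role's Run() loop.
    public static void ProcessNextRequest()
    {
        CloudQueue queue = GetQueue();
        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            // ... carry out the requested task using message.AsString ...
            queue.DeleteMessage(message);
        }
    }
}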


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Jayaram Krishnaswamy asserted Who said you cannot have design view of an SQL Azure table? in this 9/1/2010 post:

It is true that if you are connecting to SQL Azure from SSMS you only have recourse to T-SQL for the most part (with Delete, however, you do get a user interface). Well, this has been one of the sore points, as some believe that a GUI is more productive than using T-SQL. T-SQL and SSMS are really the two sides of a doppelgänger and, you know who the evil one is.

But Project 'Houston' has changed that for good. It has a lovely (if I may so call it) Silverlight-based user interface, complete with animation effects. It is a web-based database management tool and a nice offspring of the Windows cloud platform. Although new for Microsoft, web-based database management tools have been in vogue for quite some time.
Houston is presently hosted on the SQL Azure developer portal.

Why wait? Sign up for Windows Live, then follow it up with a Windows Azure account. You are all set and ready to roll. Roll, baby, roll!

Here is a screen shot of something you always wanted to do with SQL Azure. See all the (gory) details of your table in design view and make changes to it and save.


I am sure that if Microsoft adds a dash of CSS and a bit more Silverlight shine, you could have syntax highlighting and IntelliSense in no time at all. Right now the syntax is all monochrome.


Chris Downs posted Follow-up paper from FMS, Inc. on a popular topic: SQL Azure and Access on 8/1/2010:

Two recent papers written by Luke Chung of FMS, Inc. about SQL Azure and Access have created a lot of interest on his blog, as well as a few linked forum discussions.

In response, Luke has written a follow-up paper about deploying an Access database once it’s linked to SQL Azure. He has also revised his original paper about linking to SQL Azure to clarify that you only need to install SQL Server 2008 R2 Management Studio (SSMS) and not SQL Server itself.

Thanks again, Luke!


David Ramel reported SQL Azure Gets New Features -- Users Want More! in an 8/31/2010 article for Visual Studio Magazine’s Data Driver column:

Microsoft last week updated its cloud-based SQL Azure service, but some users are still clamoring for additional features to bring it up to par with SQL Server.

Service Update 4 enables database copying, among other improvements. Several readers, however, immediately responded to the announcement by asking for more. One issue is the lack of a road map of future enhancements planned for SQL Azure.

"It is great to see the SQL Azure team constantly improving the service," wrote a poster called Savstars. "I think what a lot of people would like to see a feature implementation road map from the SQL Azure teams' management. This should assist project managers figure out when would be the best time to release their projects to the SQL Azure platform, based on when the required features become available."

A reader named Niall agreed: "Yeah I have been asking for some sort of road map previously. It would really help so that we can plan on when or if we use SQL Azure for production." He also said Reporting Services was his big requirement, while another reader asked for "free text support" -- I'm thinking he might've meant "full-text search support" or "free tech support."

Other limitations of SQL Azure as compared to SQL Server are numerous, as Microsoft points out. It also lists a page full of general similarities and differences.

Michael K. Campbell earlier this year wrote that missing features such as Geography or Geometry data types, Typed XML and CLR functionality could serve as "showstoppers" for database developers considering moving to the Azure cloud.

The lack of BACKUP and RESTORE commands has long been a sore point, which the new copy capability addresses. Other enhancements in the new service update include more data centers for the management tool code-named Houston, which will improve performance. Also, a documentation page has been put together, though right now its topics cover only connecting to SQL Azure through various means such as ADO.NET, ASP.NET, Entity Framework and PHP.

What features would you like to see added next? Comment below or drop me a line.

I’ve been clamoring for Transparent Data Encryption (TDE) in SQL Azure since it replaced SQL Data Services (SDS), which in turn replaced SQL Server Data Services (SSDS). I’m also looking for a TDE equivalent for Windows Azure tables and blobs.
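Until something like TDE arrives, encrypting data on the client before it reaches the service is the usual stopgap. Here is a minimal sketch of that approach (my own illustration, not an Azure feature); the caller must manage the key and IV, which is exactly the burden TDE would otherwise remove:

using System.IO;
using System.Security.Cryptography;

static class ClientSideCrypto
{
    // The caller supplies and safeguards the key and IV -- the part TDE would handle transparently.
    public static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
    {
        using (var aes = new AesManaged { Key = key, IV = iv })
        using (var ms = new MemoryStream())
        {
            using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plaintext, 0, plaintext.Length);
            }
            // The ciphertext bytes can then be uploaded as a blob or stored in a table property.
            return ms.ToArray();
        }
    }
}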


No significant articles today.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Admin posted Simplified access control using Windows Azure AppFabric Labs on 8/31/2010:

Earlier this week, Zane Adam announced the availability of the New AppFabric Access Control service in LABS. The highlights for this release (and I quote):

  • Expanded Identity provider support – allowing developers to build applications and services that accept both enterprise identities (through integration with Active Directory Federation Services 2.0), and a broad range of web identities (through support of Windows Live ID, Open ID, Google, Yahoo, Facebook identities) using a single code base.
  • WS-Trust and WS-Federation protocol support – Interoperable WS-* support is vital to many of our enterprise customers.
  • Full integration with Windows Identity Foundation (WIF) – developers can apply the familiar WIF identity programming model and tooling for cloud applications and services.
  • A new management web portal -  gives simple, complete control over all Access Control settings.
What will we be doing?

In essence, I’ll be “outsourcing” the access control part of my application to the ACS. When a user comes to the application, he will be asked to present certain “claims”, for example a claim that tells what the user’s role is. Of course, the application will only trust claims that have been signed by a trusted party, which in this case will be the ACS.

Fun thing is: my application only has to know about the ACS. As an administrator, I can then tell the ACS to trust claims provided by Windows Live ID or Google Accounts, which will be reflected in my application automatically: users will be able to authenticate through any service I configure in the ACS, without my application having to know about it. Very flexible, as I can tell the ACS to trust, for example, my company’s Active Directory and perhaps also the Active Directory of a customer who uses the application.

Prerequisites

Before you start, make sure you have the latest version of Windows Identity Foundation installed. This will make things simple, I promise! Other prerequisites, of course, are Visual Studio and an account on https://portal.appfabriclabs.com. Note that, since it’s still a “preview” version, this is free to use.

In the labs account, create a project and in that project create a service namespace. This is what you should be seeing (or at least something similar):

AppFabric labs project

Getting started: setting up the application side

Before starting, we will need a certificate for signing tokens and the like. Let’s just start by creating one so we don’t have to worry about it further down the road. Issue the following command in a Visual Studio command prompt:

MakeCert.exe -r -pe -n "CN=<your service namespace>.accesscontrol.appfabriclabs.com" -sky exchange -ss my

This will create a certificate that is valid for your ACS project. It will be installed in the local certificate store on your computer. Make sure to export both the public and private key (.cer and .pfx).

That being said and done: let’s add claims-based authentication to a new ASP.NET website. Simply fire up Visual Studio and create a new ASP.NET application. I called it “MyExternalApp” but in fact the name is all up to you. Next, edit the Default.aspx page and paste in the following code:

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
    CodeBehind="Default.aspx.cs" Inherits="MyExternalApp._Default" %>

<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <p>Your claims:</p>
    <asp:GridView ID="gridView" runat="server" AutoGenerateColumns="False">
        <Columns>
            <asp:BoundField DataField="ClaimType" HeaderText="ClaimType" ReadOnly="True" />
            <asp:BoundField DataField="Value" HeaderText="Value" ReadOnly="True" />
        </Columns>
    </asp:GridView>
</asp:Content>

Next, edit Default.aspx.cs and add the following Page_Load event handler:

protected void Page_Load(object sender, EventArgs e)
{
    IClaimsIdentity claimsIdentity =
        ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities.FirstOrDefault();

    if (claimsIdentity != null)
    {
        gridView.DataSource = claimsIdentity.Claims;
        gridView.DataBind();
    }
}

So far, so good. If we had everything configured, Default.aspx would simply show us the claims we received from ACS once we have everything running. Now, in order to configure the application to use the ACS, there are two steps left to do:

  • Add a reference to Microsoft.IdentityModel (located somewhere at C:\Program Files\Reference Assemblies\Microsoft\Windows Identity Foundation\v3.5\Microsoft.IdentityModel.dll)
  • Add an STS reference…

That first step should be simple: add a reference to Microsoft.IdentityModel in your ASP.NET application. The second step is nearly equally simple: right-click the project and select “Add STS reference…”, like so:

Add STS reference

A wizard will pop-up. Here’s a secret: this wizard will do a lot for us! On the first screen, enter the full URL to your application. I have mine hosted on IIS and enabled SSL, hence the following screenshot:

Specify application URI

In the next step, enter the URL to the STS federation metadata. To the what where? Well, to the metadata provided by ACS. This metadata contains the types of claims offered, the certificates used for signing, … The URL to enter will be something like https://<your service namespace>.accesscontrol.appfabriclabs.com:443/FederationMetadata/2007-06/FederationMetadata.xml:

Security Token Service

In the next step, select “Disable security chain validation”. Because we are using self-signed certificates, selecting the second option would lead us to doom because all infrastructure would require a certificate provided by a valid certificate authority.

From now on, it’s just “Next”, “Next”, “Finish”. If you now have a look at your Web.config file, you’ll see that the wizard has configured the application to use ACS as the federated authentication provider. Furthermore, a new folder called “FederationMetadata” has been created, which contains an XML file that specifies which claims this application requires. Oh, and some other details on the application, but nothing to worry about at this point.

Our application has now been configured: off to the ACS side!

Getting started: setting up the ACS side

First of all, we need to register our application with the Windows Azure AppFabric ACS. This can be done by clicking “Manage” on the management portal over at https://portal.appfabriclabs.com. Next, click “Relying Party Applications” and “Add Relying Party Application”. The following screen will be presented:

Add Relying Party Application

Fill out the form as follows:

  • Name: a descriptive name for your application.
  • Realm: the URI that the issued token will be valid for. This can be a complete domain (e.g., www.example.com) or the full path to your application. For now, enter the full URL to your application, which will be something like https://localhost/MyApp.
  • Return URL: where to return after successful sign-in
  • Token format: we’ll be using the defaults in WIF, so go for SAML 2.0.
  • For the token encryption certificate, select X.509 certificate and upload the certificate file (.cer) we’ve been using before
  • Rule groups: pick one; the best approach is to create a new one specific to the application we are registering

Afterwards click “Save”. Your application is now registered with ACS.

The next step is to select the Identity Providers we want to use. I selected Windows Live ID and Google Accounts as shown in the next screenshot:

Identity Providers

One thing left: since we are using Windows Identity Foundation, we have to upload a token signing certificate to the portal. Export the private key of the previously created certificate and upload it to the “Certificates and Keys” part of the management portal. Make sure to specify that the certificate is to be used for token signing.

Signing certificate Windows Identity Foundation WIF

All right, we’re nearly done. Well, in fact: we are done! An optional next step would be to edit the rule group we created before. This rule group describes the claims that will be presented to the application asking for the user’s claims. This is very powerful, because it also supports so-called claim transformations: if an identity provider provides ACS with a claim that says “the user is part of a group named Administrators”, the rules can transform that claim into a new claim stating “the user has administrative rights”.

Testing our setup

With all this information and configuration in place, press F5 inside Visual Studio and behold… Your application now redirects to the STS in the form of ACS’ login page.

Sign in using AppFabric

So far, so good. Now sign in using one of the identity providers listed. After a successful sign-in, you will be redirected back to ACS, which will in turn redirect you back to your application. And then: misery :-)

Request validation

ASP.NET request validation kicked in since it detected unusual headers. Let’s fix that. Two possible approaches:

  • Disable request validation, but I’d prefer not to do that
  • Create a custom RequestValidator

Let’s go with the latter option… Here’s a class that you can copy-paste in your application:

public class WifRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value, RequestValidationSource requestValidationSource, string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        if (requestValidationSource == RequestValidationSource.Form && collectionKey.Equals(WSFederationConstants.Parameters.Result, StringComparison.Ordinal))
        {
            SignInResponseMessage message = WSFederationMessage.CreateFromFormPost(context.Request) as SignInResponseMessage;

            if (message != null)
            {
                return true;
            }
        }

        return base.IsValidRequestString(context, value, requestValidationSource, collectionKey, out validationFailureIndex);
    }
}

Basically, it’s just validating the request and returning true to ASP.NET request validation if a SignInResponseMessage is in the request. One thing left to do: register this validator with ASP.NET. Add the following line of code in the <system.web> section of Web.config:

<httpRuntime requestValidationType="MyExternalApp.Modules.WifRequestValidator" />

If you now try loading the application again, chances are you will actually see the claims provided by ACS:

Claims output from Windows Azure AppFabric Access Control Service

There, that’s it. We have now successfully delegated access control to ACS. Obviously the next step would be to specify which claims are required for specific actions in your application, provide the necessary claim transformations in ACS, … All of that can easily be found on Google Bing.

Conclusion

To be honest: I’ve always found claims-based authentication and Windows Azure AppFabric Access Control an excellent match in theory, but a hideous and cumbersome beast to work with. With this labs release, things get interesting and nearly self-explanatory, allowing for simpler implementation in your own application. As an extra bonus to this blog post, I also chose to link my ADFS server to ACS: it took me literally 5 minutes to do so and it works like a charm!

Final conclusion: AppFabric team, please ship this soon. I really like the way this labs release works, and I reckon many users who find the step up to ACS too large today may well take that step if they can use ACS in the simple manner this labs release provides.
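For readers wondering what the “specify which claims are required for specific actions” step mentioned above might look like in code, here is a minimal WIF sketch (my own illustration, not part of the quoted walkthrough); the claim type and the “Administrator” value are assumptions that depend on the rules your ACS rule group emits:

using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimChecks
{
    public static bool UserHasAdministrativeRights()
    {
        // WIF replaces the current principal with a claims principal after sign-in.
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null)
        {
            return false;
        }

        // "Administrator" is a placeholder; it should match whatever your ACS rules emit.
        return identity.Claims.Any(c =>
            c.ClaimType == ClaimTypes.Role &&
            c.Value == "Administrator");
    }
}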


Ron Jacobs (@ronljacobs) embedded a 00:14:31 endpoint.tv - Workflow Services as a Batch Job video segment in this late 9/1/2010 post:

Sometimes you have work that you want to schedule for off-peak times or have happen on a recurring schedule, such as every 3 hours. While there are many ways to do this, Workflow Services are an interesting option. In this episode, I'll show you how you can create a service that accepts start, stop, and query messages, and supports scheduling.
WF4 Batch Job Example (MSDN Code Gallery)



<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Anton Staykov explained How to publish your Windows Azure application right from Visual Studio 2010 in this 8/31/2010 tutorial:

Windows Azure is an emerging technology that will gain a bigger share of our lives as software developers and IT pros. With earlier releases of the Windows Azure Tools for Visual Studio, deploying an application into the Azure environment was an almost painful process. The standard Publish process created an Azure package and opened the Windows Azure portal for us to publish the package manually. This option still exists in Visual Studio 2010, and is the only option in Visual Studio 2008. However, there is a new, slick option that allows us to publish/deploy our Azure package right from within Visual Studio. This post is about that particular option.

Before we begin, let’s make sure we have installed the most recent version of Windows Azure Tools for Visual Studio.

For the purpose of the demo I will create a very simple CloudDemo application. Just select “File” –> “New” –> “Project”, and then choose “Cloud” from “Installed Templates”. The only available template is “Windows Azure Cloud Service C#”:

01_NewProject

A new window will pop up with a wizard for the initial configuration of Roles for our service. Just add one ASP.NET Web Role:

02_addWebRole

Assuming this is the cloud project we want to deploy, let’s first run through the Windows Azure Web Role deployment checklist before we continue (it is a common mistake to miss configuring the DiagnosticsConnectionString setting of our WebRole).

Now it is time to publish our Windows Azure service with that single ASP.NET WebRole. There is an initial configuration that must be performed once. Then, every time we publish a new version, it will be just a single click away!

Right click on the Windows Azure Service project from our solution and choose “Publish” from the context menu:

03_publishMenu

This will pop up a new window that will help publish our project:

04_publish_mainScreen

There are two options to choose from: Create Service Package Only and Deploy your Cloud Service to Windows Azure. We are interested in the second one – Deploy your Cloud Service to Windows Azure. Now we have to configure our credentials for deploying to Windows Azure. The deployment process uses the Windows Azure management API, which works with client certificate authentication, and there is a neat option for generating client certificates for use with Windows Azure. From the window that is still open (Publish Cloud Service), open the drop-down right below “Credentials” and choose “Add …”:

05_publish_mainScreen_addCredential

Another window “Cloud Service Management Authentication” will open:

06_publish_addCredentialWindow

Within this window we will have to Create a certificate for authentication. Open the drop down and choose “<Create…>”:

07_publish_addCredentialWindowCreate

This option will automatically create a certificate for us (we have to name it). Once the certificate is created, we select it from the drop-down menu and proceed to step (2) of the wizard, which is uploading our certificate to the Windows Azure Portal. For this task, the wizard offers an easy way of copying the certificate to a temp folder. By clicking the “Copy the full path” link, the full path is automatically copied to our clipboard:

08_publish_addCredentialAlmostFinal

Now we have to log in to the Windows Azure portal (http://windows.azure.com/) (but don’t close any Visual Studio 2010 window, as we will be coming back to it) and upload the certificate to the appropriate project. First we must select the project to which we will assign the certificate:

09_AzurePortal_SelectProject

Then we click on the “Account” tab and navigate to the “Manage my API certificates” link:

10_AzurePortal_Account

Here, we simply click browse and just paste the copied path to the certificate, then click Upload:

11_AzurePortal_UploadCertificate

Please note that there is a small chance of encountering a “The certificate is not yet valid” error during the upload process. If that happens, wait for a minute or two and try to upload it again. The reason for this error is that your computer’s clock might not be as accurate and synchronized as the Windows Azure servers’. Thus, your clock may be a minute or more ahead of the actual time, and your generated certificate is valid from a point in time that has not yet occurred on the Windows Azure servers. When you upload the certificate, you will see it in the list of installed certificates:

13_AzurePortal_UploadedCertificate

After you upload the certificate successfully to the Windows Azure server, you have to go back to the “Account” tab and copy the Subscription ID to your clipboard:

12_AzurePortal_SubscibtinId

Going back to Visual Studio’s “Cloud Service Management Authentication” window, you have to paste your subscription ID into the field for it:

14_publish_CloseToOK

At the last step of configuring our account, we have to define a meaningful name for it, so that when we see it in the drop-down list of installed Credentials, we will know which service this account is for. For this project I chose the name “WindowsAzureCloudDemoCert”. When we are ready and hit the OK button, we go back to the “Publish Cloud Service” window and select “WindowsAzureCloudDemoCert” from the Credentials drop-down. An authentication attempt will be made against the Azure service to validate the credentials. If everything is fine, we will see details for our account, such as the account name, slots for deployment (production & staging), and storage accounts associated with that service account:

15_publish_OK

When you hit OK, the publish process will start. A successful publish process finishes in about 10 minutes. A friendly window within Visual Studio, “Windows Azure Activity Log”, will show the process steps and history:

16_published

Well, as I said, there is an initial process of configuring credentials. Once you have set everything up, the publish process is just a matter of choosing the credentials and the Hosted Service Slot for deployment (production or staging).


J. D. Meier explained his role in The Design of the [New] MSDN Hub Pages in an 8/31/2010 post. Here’s the new MSDN Cloud hub page:

[Screenshot: the new MSDN Cloud hub page]

This is a behind the scenes look at my involvement in the creation of the MSDN Hub pages. 

[Screenshot: the MSDN home page showing the main Hub buttons]

The MSDN Hub pages are the ones you get to from the main “buttons” on the MSDN home (pictured above). Specifically, these are the actual pages:

The intent of the MSDN Hub pages is to create some simple starting points for some of our stories on the Microsoft developer platform.  For example, you might want to learn the Microsoft cloud story, but you might not know the “building blocks” that make up the story (Windows Azure, SQL Azure, and Windows Azure platform AppFabric).  A Hub page would be a way to share a simple overview of the story, a way to get started with the technology, common application paths and roadmaps, and where to go for more (usually the specific Developer Centers that would be a drill-down for a specific technology).

Why Was I Involved?
If you’re used to seeing me produce Microsoft Blue Books for patterns & practices, and focusing on architecture and design, security, and performance, it might seem odd that I was part of the team to create the MSDN Hub pages.   Actually, it makes perfect sense, and here’s why -- They needed somebody who had looked across the platform and technology stacks and could help put the story together.  Additionally:

  • The purpose of the MSDN Hubs was to tell our platform story and put the platform Legos together in a meaningful way.  This is a theme I’ve had lots of practice with over the years on each of my patterns & practices projects.
  • I was already working on the Windows Developer Center and the Windows IA (Information Architecture), and the .NET IA, so I was part of the right v-teams and regularly interacting with the key people making this happen.
  • I shipped our platform playbook for the Microsoft Platform – the patterns & practices Application Architecture Guide, second edition.
  • I had put together a map of our Microsoft application platform story, as well as created maps, matrixes, and drill downs on our stories for key clusters of our Microsoft technologies including the presentation technology stack, the data access technology stack, the workflow technology stack, and the integration technology stack, etc. 
  • I had previously worked on specific projects to create a catalog to organize and share the patterns & practices catalog of assets. (Internally we called this the “the Catalog Project”.) 
  • I had worked on an extensive catalog of app types, which served as the backbone for some downstream patterns & practices projects while influencing others, including factories, early attempts at MSF “app templates”,  our patterns & practices catalog (so we could  enable browsing our catalog by application type), and then of course, the Microsoft Application Architecture Guide.
  • I teamed across product teams, support, field, industry experts, and customers to create a canonical set of app types for the App Arch Guide.  Here’s what Grady Booch, IBM tech fellow, had to say about the App Types work -- “an interesting language for describing a large class of applications.”  Naturally, this work fed into the MSDN Hub pages since we need to map out the most common application patterns, paths, and combos. 

My Approach
My approach was pretty simple.  I worked closely with a variety of team members including Kerby Kuykendall, Howard Wooten, Chris Dahl, John Boylan, Cyra Richardson, Pete M Brown, and Tim Teebken. I started off working mostly with Kerby, but eventually I ended up working closest with Tim because he became my main point of contact for influencing and shaping the work.  That said, it was still a lot of mock ups, ad-hoc meetings, whiteboard discussions, and group meetings to shape the overall result. Tim did a stellar job of integrating my feedback and recommendations, as well as sanity checking group decisions with me.

I also sanity checked things with customers, and I worked closely with folks on the Microsoft Developer Platform Evangelism team including Tim Sneath and Jaime Rodriguez.  They were passionate about having a way to tell our platform story, show common app paths, show how to put our Legos together, and help make technology choices.  I tried to surface this in the design and information model for the Hub pages.

The Hubs
For the Hubs, at one of our early meetings in November of 2009, I recommended we use “deployment targets” as a way to help slice things up and keep it simple.  Specifically:

  • Cloud
  • Desktop
  • Games
  • Mobile
  • Server
  • Web

As you can see, it maps very well to the App Types set I created circa 2004, but I evolved it to account for a few things.  First, I included learnings from working on the App Arch guide (such as moving away from Rich Client to just “desktop”).  Second, I tried to pin it more directly to physical deployment targets to keep it simple.  As a developer, you can write apps to target the Web (a “Web” browser app), a desktop (such as a Windows client, or Silverlight, or WPF, etc.), a game (game console), etc.  Third, I aligned with marketing efforts, such as recommending we use the “deployment target” metaphor, and I renamed the “Mobile” bucket to “Phone” (which worked, because it extended the “deployment target” metaphor, was still easy to follow, and kept things simple).

I also kept the physical aspect of the “deployment target” metaphor loose.   For example, “Web” could run on server, or “desktop”, etc.  Instead, I wanted to bubble up interesting intersections of application types plus common deployment targets, and keep it simple.

The Server Hub
For the server hub, I recommended addressing our story from a few lenses.  First, we have server-side products that can be extended, such as SharePoint, Exchange, SQL Server, etc.  That lens is pretty straightforward.  Second, I recommended focusing on “Service.”  Here’s where it’s hard for folks to follow if they aren’t familiar with server-side development.  While you can lump “service” under “Cloud” (as a cloud developer, I can write a Web app, a service, etc.), the “service” story is a very special one.  It’s the evolution of our “middleware developers” and our “server-side developers.”  It’s the path that the COM builders and server-side component builders shifted to … a more message-based architecture over an object-based one, as well as a shift to replacing DCOM with HTTP.

So if we had a Server Hub, it realistically should address building on our server-side products/technologies (SharePoint, Exchange, SQL Server, AppFabric, etc.) and it should address “Services.”  Sure you could also lump SharePoint under Web or Services under Cloud, but you can also bubble up and give focus to some of the fundamental parts of our Microsoft application developer platform.

To be fair, a lot of folks moved around during the MSDN Hub page project, and as new folks came on board, the history, insights, and some of the work may have gotten lost.

How To Solve the Issue of Too Many Hubs
This was my suggestion for dealing with too many Hubs:

“I think one thing that helps to keep in mind is that different people will want different views – but I think it’s simpler to choose the most useful one across the broadest set of scenarios.   That’s why Burger King and McDonald’s have a quick simple visual menu of the most common options … then you can drill in for more with their detailed menu if needed.  I like that metaphor because it addresses the “simple” + “complete”  Platform is a pretty solid bet – with an orientation around “tribes” (I’ll walk you through when we sync live) – after all, we do competitive assessments against platforms and that’s where we need to win.”

I also made a few additional recommendations to deal with the problem of “simple” + “complete”:

  1. Add an “Office/SharePoint”, and a “Server” (SQL Server, Windows Server, Exchange) – the Office/SharePoint platform tends to have a tribe of customers that speak the same language and share the same context … different than your everyday .NET dev.   It’s like BizTalk in that it’s a specialized space.
  2. Use a carousel approach to feature the main 4, then a “view more…” pattern to show the full 6 or so top-level hubs – and leave breathing room.  I would go to a page that shows the full set at the top, but then shows the full set of products against a durable backdrop.  This would address the “AND” solution of both “Simple” and “Complete”.

This would provide a “durable” + “expandable” … AND… “simple” + “complete” … and in the end, a “platform guidance” approach.

While I’m not a graphic artist, I had done some mockups to help illustrate the point …

J. D. makes interesting points about site navigation, whether delivered in the cloud or from on premises servers.


Mostafa Elzoghbi (@mostafaelzoghbi) described Troubleshooting Cloud Service Deployment in Azure in this 8/26/2010 post:

I was working over the last few days to deploy a Sitefinity CMS website to the cloud on Windows Azure. I faced various problems and stopping points while trying to deploy my cloud service to Azure.

In this post I will share the main checkpoints that you should go through before you start deploying your application to the cloud. If you have tried to deploy your application and are getting runtime errors, cannot start your service in the cloud, or are hitting other problems, the cause is often missing files or verification steps that should have been completed in your development fabric environment before deploying to the cloud.

  1. Make sure that your web/worker role compiles with no errors on your development machine.
  2. Make sure that all custom DLLs that you reference are copied to the output folder. To do this, right-click each of your custom or third-party DLLs, open the Properties window, and set Copy to Output Directory to Copy Always.
  3. Run your service and make sure that you are not getting any runtime errors on the local development fabric before you start deploying.
  4. If you have published your service and it keeps stopping when you try to start it, there is a problem with your service package file.
  5. If you are having any problem starting your cloud service, install IntelliTrace (available with Visual Studio 2010 Ultimate edition):
    http://blogs.msdn.com/b/jnak/archive/2010/05/27/using-intellitrace-to-debug-windows-azure-cloud-services.aspx
  6. If you face any problem, you can submit a support ticket from the Azure web portal; the tip is to include your Subscription ID and Deployment ID in the ticket to get a fast and well-analyzed resolution.
  7. If you are using VS2010, use Server Explorer to navigate through your Azure account components.
  8. Using SQL Server 2008 R2, script the database (schema + data) and then connect to SQL Azure using SQL Server Management Studio.
  9. IMPORTANT: Before deploying your website, make sure to update the web.config to point to the SQL Azure DB. If you miss this step, you might encounter a problem when you try to start the deployed application: the role keeps moving to the stopped state because it cannot connect to the DB (see the connection-check sketch after this list).
  10. Advice: If your application keeps moving to the stopped state, submit a ticket to the support team; they will be able to check your VM event log and guide you to a fix, since there are a lot of parameters to look at when you deploy.
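As a quick sanity check for point 9, here is a minimal sketch (my addition, not from the original post) that tries to open the SQL Azure connection string locally before you package the deployment; the connection-string name is a placeholder:

using System;
using System.Configuration;
using System.Data.SqlClient;

class ConnectionCheck
{
    static void Main()
    {
        // "ApplicationServices" is a placeholder; use the connection-string name your web.config defines.
        string connectionString =
            ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;

        try
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Connected to: " + connection.DataSource);
            }
        }
        catch (SqlException ex)
        {
            // This is the same failure the deployed role hits on startup when web.config still points at a local DB.
            Console.WriteLine("Connection failed: " + ex.Message);
        }
    }
}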

Chris Hoffman posted the Can Cloud Computing Help Fix Healthcare? question to the TripleTreeLLC blog on 8/23/2010 with an abridged version of a CloudBook magazine article by Scott Donahue:

[Photo: Chris Hoffman]

(The following is an excerpt from an article our colleague Scott Donahue authored for CloudBook magazine on hCloud – read the full article here)

Few topics have dominated the political news cycle over the past year more than health care reform. The recently passed Patient Protection and Affordable Care Act is aimed at improving the quality, cost, and accessibility of health care in the United States – an indisputably massive but much-needed undertaking.

Aside from political debates in Washington, the technology industry continues to buzz about cloud computing. It may seem, at first glance, that health care reform and cloud computing are unrelated, but TripleTree’s research and investment banking advisory work across the health care landscape are proving otherwise; the linkage with cloud is actually quite significant.

Our viewpoint is that cloud computing may end up mending a health care system that has largely let a decade of IT innovation pass by and now finds itself trapped in inefficiency and stifled by legacy IT systems.

Much has already been written about cloud computing’s potential and demonstrated successes at helping enterprise IT infrastructures adapt and transform into more efficient and flexible environments. But where does cloud computing fit within health care?

We have long espoused that innovation in health care needs to come from outside of the industry. Today, the likes of Amazon, Dell, Google, IBM, Intuit, and Microsoft have built early visions for cloud computing and see a role for themselves as health care solution providers. We are convinced that traditional HIT vendors will benefit from aligning with these groups such that their domain-specific knowledge can attach itself to approaches for cloud (public, private and hybrid), creating a transformational shift in the health care industry.

Cloud is active, relevant and fluid…see our colleague Jeff Kaplan’s recent blog post on the changing competitive landscape

Chris Hoffman is shown in the photo above. Microsoft’s HealthVault is an example of a commercial cloud-based healthcare application.


<Return to section navigation list> 

Visual Studio LightSwitch

Brad Becker of Microsoft’s Silverlight Team attempted to dispel recent rumors of Silverlight’s demise at the hands of HTML5 in his The Future of Silverlight essay of 9/1/2010:

There's been a lot of discussion lately around web standards and HTML 5 in particular. People have been asking us how Silverlight fits into a future world where the <video> tag is available to developers. It's a fair question—and I'll provide a detailed answer—but I think it's predicated upon an oversimplification of the role of standards that I'd like to clear up first. I'd also like to delineate why premium media experiences and "apps" are better with Silverlight and reveal how Silverlight is going beyond the browser to the desktop and devices.

Standards and Innovation

It's not commonly known, perhaps, that Microsoft is involved in over 400 standards engagements with over 150 standards-setting organizations worldwide. One of the standards we've been involved in for years is HTML and we remain committed to it and to web standards in general. It's not just idle talk, Microsoft has many investments based on or around HTML such as SharePoint, Internet Explorer, and ASP.NET. We believe HTML 5 will become ubiquitous just like HTML 4.01 is today.

Standardize - Innovate

But standards are only half of the story when we think of the advancement of our industry. Broadly-implemented standards are like paved roads.  They help the industry move forward together.  But before you can pave a road, someone needs to blaze a trail. This is innovation. Innovation and standards are symbiotic—innovations build on top of other standards so that they don't have to "reinvent the wheel" for each piece of the puzzle. They can focus on innovating on the specific problem that needs to be solved. Innovations complement or extend existing standards. Widely accepted innovations eventually become standards. The trails get paved.

In the past, this has happened several times as browsers implemented new features that later became standards. Right now, HTML is adopting as standards the innovations that came from plug-ins like Flash and Silverlight. This is necessary because some of these features are so pervasive on the web that they are seen by users as fundamentally expected capabilities. And so the baseline of the web becomes a little higher than it was before. But user expectations are always rising even faster—there are always more problems we can solve and further possibilities needing to be unlocked through innovation.

This is where Silverlight comes in. On the web, the purpose of Silverlight has never been to replace HTML; it's to do the things that HTML (and other technologies) couldn't in a way that was easy for developers to tap into. Microsoft remains committed to using Silverlight to extend the web by enabling scenarios that HTML doesn't cover. From simple “islands of richness” in HTML pages to full desktop-like applications in the browser and beyond, Silverlight enables applications that deliver the kinds of rich experiences users want. We group these into three broad categories: premium media experiences, consumer apps and games, and business/enterprise apps.

Premium Media Experiences

Examples include:

  • Teleconferencing with webcam/microphone
  • Video on demand applications with full DVR functionality and content protection like Netflix
  • Flagship online media events like the Olympics as covered by NBC, CTV, NRK, and France TĂ©lĂ©visions
  • Stream Silverlight video to desktops, browsers, and iPhone/iPad with IIS Smooth Streaming

Even though these experiences are focused on media, they are true applications that merge multiple channels of media with overlays and provide users with full control over what, when, and how they experience the content. The media features of Silverlight are far beyond what HTML 5 will provide and work consistently in users' current and future browsers. Key differentiators in these scenarios include:

  • High Definition (HD) H.264 and VC-1 video
  • Content protection including DRM
  • Stereoscopic 3D video
  • Multicast
  • Live broadcast support
  • (Adaptive) Smooth Streaming
  • Information overlays / Picture-in-picture
  • Analytics support with the Silverlight Analytics Framework
Consumer Apps and Games

The bar is continually rising for what consumers expect from their experiences with applications and devices. Whether it's a productivity app or a game, they want experiences that look, feel, and work great. Silverlight makes it possible for designers and developers to give the people what they want with:

  • Fully-customizable controls with styles and skins
  • The best designer – developer workflow through our tools and shared projects
  • Fluid motion via bitmap caching and effects
  • Perspective 3D
  • Responsive UI with .NET and multithreading
Business/Enterprise Apps

As consumers get used to richer, better experiences with software and devices, they're bringing those expectations to work. Business apps today need a platform that can meet and exceed these expectations. But the typical business app is built for internal users and must be built quickly and without the aid of professional designers. To these ends, Silverlight includes the following features to help make rich applications affordable:

  • Full set of 60+ pre-built controls, fully stylable
  • Productive app design and development tools
  • Powerful performance with .NET and C#
  • Powerful, interactive data visualizations through charting controls and Silverlight PivotViewer
  • Flexible data support: Databinding, binary XML, LINQ, and Local Storage
  • Virtualized printing
  • COM automation (including Microsoft Office connectivity), group policy management
Other Considerations

For simpler scenarios that don't require some of the advanced capabilities mentioned above, Silverlight and HTML both meet the requirements. However, when looking at both the present and future state of platform technologies, there are some other factors to take into consideration, such as performance, consistency and timing.

Performance

The responsiveness of applications and the ability for a modern application to perform sophisticated calculations quickly are fundamental elements that determine whether a user's experience is positive or not. Silverlight has specific features that help here, from the performance of the CLR, to hardware acceleration of video playback, to user-responsiveness through multithreading. In many situations today, Silverlight is the fastest runtime on the web.

Bubblemark

Consistency

Microsoft is working on donating test suites to help improve consistency between implementations of HTML 5 and CSS3 but these technologies have traditionally had a lot of issues with variation between browsers. HTML 5 and CSS 3 are going to make this worse for a while as the specs are new and increase the surface area of features that may be implemented differently. In contrast, since we develop all implementations of Silverlight, we can ensure that it renders the same everywhere.

Browser Inconsistencies

Timing

In about half the time HTML 5 has been under design, we've created Silverlight and shipped four major versions of it. And it's still unclear exactly when HTML 5 and its related specs will be complete with full test suites. For HTML 5 to be really targetable, the spec has to stabilize, browsers have to all implement the specs in the same way, and over a billion people have to install a new browser or buy a new device or machine. That's going to take a while. And by the time HTML 5 is broadly targetable, Silverlight will have evolved significantly. Meanwhile, Silverlight is here now and works in all popular browsers and OS's.

Silverlight - HTML 5 Timeline

Beyond the Browser

In this discussion of the future of Silverlight, there's a critical point that is sometimes overlooked as Silverlight is still often referred to—even by Microsoft—as a browser plug-in. The web is evolving and Silverlight is evolving, too. Although applications running inside a web browser remain a focus for us, two years ago we began showing how Silverlight is much more than a browser technology.

Silverlight Outside the Browser

There are three areas of investment for Silverlight outside the browser: the desktop, the mobile device, and the living room. Powerful desktop applications can be created with Silverlight today. These applications don't require a separate download—any desktop user with Silverlight installed has these capabilities. These apps can be discovered and downloaded in the browser but are standalone applications that are painless to install and delete. Silverlight now also runs on mobile devices and is the main development platform for the new Windows Phone 7 devices. Developers that learned Silverlight instantly became mobile developers. Lastly, at NAB and the Silverlight 4 launch this year we showed how Silverlight can be used as a powerful, rich platform for living room devices as well.

Expect to see more from Silverlight in these areas especially in our focus scenarios of high-quality media experiences, consumer apps and games, and business apps.

When you invest in learning Silverlight, you get the ability to do any kind of development from business to entertainment across screens from browser to mobile to living room, for fun, profit, or both. And best of all, you can start today and target the 600,000,000 desktops and devices that have Silverlight installed.

If you haven't already, start here to download all the tools you need to start building Silverlight apps right now.

For more information on this topic, you can watch a video with more details here.

Brad Becker, Director of Product Management, Developer Platforms

The status of Silverlight is critical to Visual Studio LightSwitch’s future success on the desktop and in the browser.


Larry O’Brien casts a jaundiced eye on Visual Studio LightSwitch in his feature-length  Windows & .NET Watch: LightSwitch turns up article of 9/1/2010 for SD Times on the Web:

The first beta of Microsoft’s new LightSwitch development environment should be available on MSDN about the time you read this. LightSwitch was code-named “KittyHawk” during its incubation (using Microsoft’s proprietary GratuiToUs capitalizatIon algorithm) and is the basis for the “return of FoxPro” rumors that have been kicking around recently. It is not, though, an evolution of the FoxPro or xBase languages, but rather a new Visual Studio SKU that produces applications backed by either C# or Visual Basic and deployed on either Silverlight or the full .NET CLR.[*] It does, however, embrace the “data + screens = programs” concept of programming that was so popular in the late 1980s.

This is, in some ways, an indictment. A Rip Van Winkle who fell asleep at the launch of Visual Basic 1.0 and woke at the Silverlight unveiling would be unlikely to guess that 20 years had passed. Apparently, programming is still the realm of an elite group that cannot quickly produce small- and medium-sized data-driven applications, and does so using hard-to-use tools that require a lot of repetitious, boring, error-prone work.

It seems the solution to this is a tool that hides as much code as possible from the power user or newer developer. Even if you buy that argument (and I’m not at all sure that you should), why should you believe the “this time we got it right” assurance that applications written by newcomers will not be fragile, poorly structured and unable to scale?

The answer to the scaling part of that question is Azure. Microsoft is pushing the message that they are “all in” to cloud computing, and LightSwitch can use Azure to host your data, your application, or both. The LightSwitch code-generation process takes care of paging, caching logic, data validation, and other sorts of code that, no doubt, can cause trouble, especially for less-experienced developers. [Emphasis added.]

On the other hand, scaling is no different than any other hard problem in software development—a trade-off that was logical at one scale may not be a good choice at another scale. It’s not impossible to have an application that can scale without the developer directly addressing issues of lazy evaluation, data aliasing and so forth, but it’s a crapshoot.

Read More: Next Page; Pages 1, 2, 3

* The way I read the LightSwitch tea leaves, Silverlight is an integral, not optional, component.


Paul Patterson (@PaulPatterson) explained Microsoft LightSwitch – Using the Entity Field Custom Validation in this 9/1/2010 tutorial:

In my last post I demonstrated how to use the LightSwitch Is Computed field property. In this post I am going to extend the validation of a field by doing some Custom Validation.

The scenario is this: I want to schedule times to go and visit customers, be it for a customer sales call or some training, whatever. I want to build some scheduling into my application to help better manage my time.

Here is a look at my Customer table that I created earlier:

The Customer Table


What I am going to do is add a new field to my Customer entity. The new field is going to be a DateTime field, rather than just a Date type. I want to keep track of the date and the time that I am going to schedule.

Adding the DateTime Field to my Customer Entity


Now, with my new ScheduledVisitDateTime field added to my Customer entity, I am going to add it to a screen so that I can start adding some scheduling.

I already have a List and Details type screen that I created earlier. I open the CustomerList screen designer.

Over in the left had side of the designer I see the listing for the CustomersCollection that my screen is using. In the list I see my new ScheduledVisitDateTime field I created.


New field showing in screen designer

Because I created this screen with an earlier version of my Customer table, the new ScheduledVisitDateTime field does not automatically get added to the screen. So I drag and drop the ScheduledVisitDateTime field to the vertical stack used for the CustomerDetails section of my screen. I drop the field underneath the existing Last Contact Date field in the tree.

Adding the field to the screen (via the designer)

Super! Now I hit the F5 key to start the application in debug mode…

Hmm. Interesting. The application starts, as expected, and I can now see the ScheduledVisitDateTime field on my screen. Here is what I see…

The new field on the screen

There are three things that I see that I have to “tweak”. The first is the silly-looking label. The second is that the label is bold, which suggests that the field is configured with the Is Required property set to true. The third thing I notice is the stupid-looking date and time that defaults into the field.

“1/1/1900 12:00 AM” is not a date that I want to default in there. In fact, I want the field to default with nothing in it. I want the option to either have a date, or none at all. I think what happened was that because the Is Required property was set to true when I first added the field, LightSwitch may have added some default dates to all my existing records. I am not totally sure that this was the case; however, something in my head is telling me this is what happened.

So back to the table designer I go.

In the Customer table designer I give the ScheduledVisitDateTime field better Display Name and Description property values. I also update the Is Required property so that it isn’t checked. This will give me the option to save a customer without having a date and time value in my new field.

Updated Field Properties

I’m pretty sure that I know a little something about setting LightSwitch field properties now, so I fire up the application again by hitting the F5 key, just to prove that little horned fella on my shoulder (let’s name him Nelson for now) wrong…

…Nelson laughs.

All looks good in the running application, except for those nasty defaults again. Not sure why those show like that. Just as a test, I added and saved a new customer, but didn’t add any value for my new field. Interestingly, the record saved without a value, and when I view the new customer via my CustomerList screen, the new field contains an empty value.

No worries. I delete the ScheduledVisitDateTime value for all my customers – to start from a clean slate.

Moving on…

With my new field, I now want to add some simple validation for any data that I enter in it. The validation, or business rule, is that I only want to allow a date and time that is in the future. It would be silly to add a past date into a place where I want to store dates for the future.

“What kind of scheduling is that!?! Dumb Ass! Haw Haw” …Nelson exclaims.

Back to the Customer table designer in LightSwitch.

With the ScheduledVisitDateTime field selected, I head over to the Properties panel. At the bottom of the Properties panel is a link titled Custom Validation. I click it.

The Custom Validation link

Clicking the Custom Validation link opens the code designer for my Customer entity. Specifically, the code designer opens with a procedure stub already created and titled ScheduledVisitDateTime_Validate()…

image
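The stub itself only appears in the screenshot above; based on the analogous _Validate stubs shown in Beth Massi’s post later in this section, the generated method looks roughly like this (treat the exact shape as an approximation):

    Private Sub ScheduledVisitDateTime_Validate(ByVal results As EntityValidationResultsBuilder)
        ' Validation logic for the ScheduledVisitDateTime field goes here
    End Sub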

This is where I am going to add some logic that will validate that the date and time I enter into the ScheduledVisitDateTime field are in the future. So, that’s what I do…

image
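The completed code is only shown in the screenshot, but a minimal sketch of the future-date check described here might look like the following, assuming the optional field surfaces on the entity as a nullable Date (the error message text is illustrative):

Public Class Customer

    Private Sub ScheduledVisitDateTime_Validate(ByVal results As EntityValidationResultsBuilder)

        ' Only validate when a value has actually been entered (the field is optional)
        If Me.ScheduledVisitDateTime.HasValue Then

            ' The scheduled visit must be in the future
            If Me.ScheduledVisitDateTime.Value <= Now Then
                results.AddPropertyError("The scheduled visit date and time must be in the future.")
            End If
        End If
    End Sub
End Class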

So I run the application by hitting the F5 key.

KaBLAMMO. {Insert Roger Daltrey “yeeeaaaahhhh…..” Won’t Get Fooled Again scream here}

…Nelson has fallen off my shoulder. Not sure where he went to.

Meanwhile, back at the LightSwitch ranch…the application launches and displays my CustomerList screen. I select the first customer record and purposely click the field labelled Next Scheduled Visit and select a date in the past.

Validation errors showing on screen.

Looks like my custom validation is working! LightSwitch shows both a notification in the tab for the CustomerList screen, as well as a red border around my new field. Clicking on the message in the tab brings up the error message that I defined in my validation code…

The tab for the screen showing the error message.

Hovering over the corner of the red border error indicator on the field causes LightSwitch to display the same error message as a tooltip…

Same error message as a ToolTip.

Cool!

Now, I update the value to a future date and time and presto, the error messages go away. I save the record. I am now able to better manage my time by scheduling my next visit with my customers.

In my next post, I am going to extend this scheduling stuff by adding a new table with a relationship with this one. Doing so will let me add more granular scheduling, as well as set me up to better visualize my schedule by using queries.

Stay tuned!


Beth Massi (@bethmassi) explained Validating Collections of Entities (Sets of Data) in LightSwitch in this 8/31/2010 tutorial:

image One of the many challenging things in building n-tier applications is designing a validation system that allows running rules on both the client and the server and sending messages and displaying them back on the client. I’ve built a couple of application frameworks in my time, so I know how tricky this can be. I’ve been spending time digging into the validation framework for LightSwitch and I have to say I’m impressed. LightSwitch makes it easy to write business rules in one place and run them in the appropriate tiers. Prem posted a great article detailing the validation framework on the LightSwitch Team Blog yesterday, which I highly recommend you read first:

Overview of Data Validation in LightSwitch Applications.

Most validation rules you write are rules that you want to run on both the client and the server (middle-tier), and LightSwitch does a great job of handling that for you. For instance, when you put a validation rule on an entity property, this rule will run first on the client. If there is an error, the data must be corrected before it can be saved to the middle-tier. This gives the user an immediate response but also makes the application scale better because you aren’t unnecessarily bothering the middle-tier. Once the validation passes on the client, it is run again on the middle-tier. This is best practice when building middle-tiers - don’t ever assume data coming in is valid.

Validating sets (or collections) of data can get tricky. You usually want to validate the set on the client first but then you have to do it again on the middle-tier, not only because you don’t trust the client, but also because the set of data can change in a multi-user environment. You need to take the change set of data coming in from the client, merge it with the set of data stored in the database, and then proceed with validation. Dealing with change sets and merging yourself can get pretty tricky sometimes. What I didn’t realize at first is that LightSwitch also handles this for you.

Example – Preventing Duplicates

Let’s take an example that I was working on this week. I have the canonical OrderHeader --< OrderDetails >-- Product data model. I want a rule that makes sure no duplicate products are chosen across OrderDetail line items on any given order. So if a user enters the same product twice on an order, validation should fail. Here I have an orders screen that lets me edit all the orders for a selected customer. For each order I should not be allowed to enter the same product more than once:

image

Where Do the Rules Go?

You can write rules in xxx_Validate methods for entity properties (fields) and the entity itself. From the Entity Designer, select the property name, then click the arrow next to the “Write Code” button to drop down the list of available methods. The property methods will display for the selected property. The entity methods are under “General Methods”. In my example, if you select the Product property and drop down the list of methods, you see two validation methods: Product_Validate and OrderDetails_Validate.

image

The Property Methods change as you select an entity property (field) in the designer, but the General Methods are always displayed for the entity you are working with. Property _Validate methods run both on the client and then again on the middle-tier. Entity _Validate methods run on the server; these are called DataService validations.

In my order entry scenario I was first tempted to write code in the DataService on the OrderHeader entity and check the collection of OrderDetails there. When I select the OrderHeader entity in the Entity Designer, click the arrow next to the “Write Code” button, and select OrderHeaders_Validate, a method stub is generated for me in the ApplicationDataService class. This is where I was thinking I could validate my set of OrderDetails and return an error if there were duplicates.

Public Class ApplicationDataService

    Private Sub OrderHeaders_Validate(ByVal entity As OrderHeader, ByVal results As EntitySetValidationResultsBuilder)
        Dim isValid = False
        'Write code to validate entity.OrderDetails collection
        '....
        If Not isValid Then
            results.AddPropertyError("There are duplicated products on the order")
        End If
    End Sub
End Class

However I quickly realized that this wouldn’t work because the OrderHeader entity would need to be changed for this validation to fire. If a user is editing a current order’s line items (OrderDetails) then only the validation for the OrderDetail would fire, not OrderHeader. Another issue with putting my rule in the ApplicationDataService class is the user would have to click save before the rule would fire and we’d have an unnecessary round-trip to the middle-tier. We want to be able to check this set for problems on the client first. Another issue is if I found an error then only a general validation message on the order would be presented to the user. They would have to stare at the screen to figure out the problem.

I think the reason why I went this route in the first place is because I was thinking I needed to merge the change set coming from the client with the set of data in the database and then validate that. It turns out that LightSwitch handles this for you. When you are validating a set of data (entity collection) on the client, you are validating what is on the user’s screen. When the validation runs on the server you are validating the merged set of data. NICE!

(Note that you can still access the change set via the DataWorkspace object but we’ll dive into that in a future post. )

The Right Way to Write this Rule

Since LightSwitch is doing all the heavy lifting for me, this rule gets a whole lot easier to implement. Because we’re checking for duplicate products on each OrderDetail, we need to put the code in the Product_Validate method of the OrderDetail entity (see screenshot above). Now we can write a simple LINQ query to check for duplicates.

Public Class OrderDetail

    Private Sub Product_Validate(ByVal results As EntityValidationResultsBuilder)

        If Me.Product IsNot Nothing Then

            'Look at all the OrderDetails that: 
            '   1) have a product specified (detail.Product IsNot Nothing)
            '   2) have the same product ID as this entity (detail.Product.Id = Me.Product.Id)
            '   3) is not this entity (detail IsNot Me)
            Dim dupes = From detail In Me.OrderHeader.OrderDetails
                          Where detail.Product IsNot Nothing AndAlso
                                detail.Product.Id = Me.Product.Id AndAlso
                                detail IsNot Me

            'If Count is greater than zero then we found a duplicate
            If dupes.Count > 0 Then
                results.AddPropertyError(Me.Product.ProductName + " is a duplicate product")
            End If
         End If
    End Sub
End Class

This validation will fire for every line item we add or update on the order. It will first fire on the client and Me.OrderHeader.OrderDetails will be the collection of line items being displayed on the screen. If this rule passes validation on the client then it will fire on the middle-tier and the Me.OrderHeader.OrderDetails will be the collection of line items that were sent from the client merged with the data on the server. This means that if another user has modified the line items on the order we can still validate this set of data properly. Also notice when we specify the error message, it is attached to the Product property on the OrderDetail entity so when the user clicks the message in the validation summary at the top of the screen, the proper row in the grid is highlighted for them.

image

Stay tuned for more How Do I videos on writing business rules.

Enjoy!


<Return to section navigation list> 

Windows Azure Infrastructure

James Urquhart went Exploring a healthy cloud-computing job market in his 9/1/2010 post to CNet News’ The Wisdom of Clouds blog:

image While much of the global economy struggles with creating jobs, the high-tech industry has had a better record than most. Yes, there are conflicting reports about IT job growth overall. But in general, the market remains quite strong for technologists. 

Within high tech itself, there is one standout opportunity for experienced, innovative people: cloud computing.

Even with the roar of VMworld-related cloud announcements--and counter-announcements--there have been a few really interesting data points that have come out this week with respect to cloud-related work.

For instance, Boto creator Mitch Garnaat noted last week that there were 181 jobs listed for Amazon Web Services in the U.S. alone. Rob La Gesse countered that the second largest infrastructure as a service provider, Rackspace, had 175 jobs available in the States. Conversations with sources at Terremark, IBM, and others indicate that they are quickly expanding their cloud teams in response to increasing market demand.

image Those providing cloud infrastructure are also seeking very unusual skill combinations that combine infrastructure and data center architecture and operations, with application architecture and operations. Knowledge of server, network and storage, and the application of converged infrastructures to virtualized environments--or at least the ability to understand what that means--seems to be a baseline for the large systems vendors these days, such as my own employer, Cisco Systems, as well as IBM, Hewlett-Packard, Dell, Oracle, EMC, NetApp, and most others.

Perhaps the most surprising news about cloud jobs this week, however, was from an employment report from freelance contractor job site, Elance.

While development of applications to run in the cloud has largely been associated with Amazon Web Services to date, there appears to have been a sudden burst of interest in Google App Engine developers--in fact, a 10x increase since last quarter.

This burst of demand put App Engine ahead of AWS in Elance's ranking of top overall skills in demand, according to the Elance report.

I find this surprising only in that I haven't seen a lot of evidence of App Engine in the marketplace. But Krishnan Subramanian has a great post exploring why this might be. The combination of VMware's Spring framework with the App Engine cloud model seems to have attracted a large number of small Web applications--a claim that is reportedly substantiated by the research of VMware's Jian Zhen on Alexis.

The bottom line is that IT use of the cloud is growing very quickly, and demand for skills to enable that growth is climbing as a result. If you have skills related to IT operations, application administration/operations, or software development, now may be the time to dive into the cloud.

Graphic credit: CC pingnews/Flickr


Lori MacVittie’s (@lmacvittie) The Impossibility of CAP and Cloud essay of 9/1/2010 for F5’s DevCentral blog addresses Brewer’s CAP theorem and NP-Completeness in cloud computing:

image It comes down to this: the on-demand provisioning and elastic scalability systems that make up “cloud” are addressing NP-Complete problems for which there are no known exact solutions.

At the heart of what cloud computing provides – in addition to compute-on-demand – is the concept of elastic scalability. It is through the ability to rapidly provision resources and applications that we can achieve elastic scalability and, one assumes, through that, high availability of systems. Obviously, given my relationship to F5, I am strongly interested in availability. It is, after all, at the heart of what an application delivery controller is designed to provide. So when a theorem is presented that basically says you cannot build a system that is Consistent, Available, and Partition-Tolerant, I get a bit twitchy.


Just about the same time that Rich Miller was reminding me of Brewer’s CAP Theorem, someone from HP Labs claimed to have solved the P ≠ NP problem (shortly thereafter determined to not be a solution after all), which got me thinking about NP-Completeness in problem sets, of which solving the problem of creating a distributed CAP-compliant system certainly appears to be a member.

CLOUD RESOURCE PROVISIONING is NP-COMPLETE

A core conflict with cloud and CAP-compliance is on-demand provisioning. There is, after all, only a finite set of resources available (cloud is not infinitely scalable), with, one assumes, each resource having a variable amount of compute availability. For example, most cloud providers use a “large”, “medium”, and “small” sizing approach to “instances” (which are, in almost all cases, virtual machines). Each “size” has a defined set of reserved compute (RAM and CPU) for use. Customers of cloud providers provision instances by size.

At first glance this should not be a problem. The provisioning system is given an instruction, i.e. “provision instance type X.” The problem begins when you consider what happens next – the provisioning system must find a hardware resource with enough capacity available on which to launch the instance.

In theory this certainly appears to be a variation of the Bin packing problem (which is NP-complete). It is (one hopes) resolved by the cloud provider by removing the variability of location (parameterization) or by the use of approximation (using the greedy approximation algorithm “first-fit”, for example). In a pure on-demand provisioning environment, the management system would search out, in real-time, a physical server with enough physical resources available to support the requested instance requirements, but it would also try to do so in a way that minimizes the utilization of physical resources on each machine so as to better guarantee availability of future requests and to be more efficient (and thus cost-effective).
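To make the “first-fit” idea concrete, here is a minimal, hypothetical sketch of such a placement heuristic (the Host class, its capacity fields and the method name are invented for illustration; a real provisioning system adds the inventory synchronization, locking and efficiency concerns discussed below):

Imports System.Collections.Generic

Module FirstFitSketch

    ' A hypothetical record of a physical server's remaining capacity.
    Public Class Host
        Public Property Name As String
        Public Property FreeRamGB As Integer
        Public Property FreeCpus As Integer
    End Class

    ' First-fit: place the requested instance on the first host that still has
    ' enough RAM and CPU, reserving that capacity. Returns Nothing when no host fits.
    Public Function PlaceInstance(ByVal hosts As List(Of Host),
                                  ByVal ramGB As Integer,
                                  ByVal cpus As Integer) As Host
        For Each h As Host In hosts
            If h.FreeRamGB >= ramGB AndAlso h.FreeCpus >= cpus Then
                h.FreeRamGB -= ramGB
                h.FreeCpus -= cpus
                Return h
            End If
        Next
        Return Nothing ' capacity exhausted: the request fails or must wait
    End Function
End Module

First-fit runs quickly, but, as the rest of this section argues, it can strand capacity and, under concurrent requests, hand two consumers the same slot unless the shared inventory is locked or otherwise kept consistent.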

Brewer’s CAP Theorem

It is impractical, of course, to query each physical server in real-time to determine an appropriate location, so no doubt there is a centralized “inventory” of resources available that is updated upon the successful provisioning of an instance. Note that this does not avoid the problem of NP-Completeness and the resulting lack of a solution as data replication/synchronization is also an NP-Complete problem. Now, because variability in size and an inefficient provisioning algorithm could result in a fruitless search, providers might (probably do) partition each machine based on the instance sizes available and the capacity of the machine. You’ll note that most providers size instances as multiples of the smallest, if you were looking for anecdotal evidence of this. If a large instance is 16GB RAM and 4 CPUs, then a physical server with 32 GB of RAM and 8 CPUs can support exactly two large instances. If a small instance is 4GB RAM and 1 CPU, that same server could ostensibly support a combination of both: 8 small instances, or 4 small instances and 1 large instance, etc… However, that would make it difficult to keep track of the availability of resources based on instance size and would eventually result in a failure of capacity availability (which makes the system non-CAP compliant).

imageHowever, not restricting the instances that can be deployed on a physical server returns us to a bin packing-like algorithm that is NP-complete which necessarily introduces unknown latency that could impact availability. This method also introduces the possibility that while searching for an appropriate location some other consumer has requested an instance that is provisioned on a server that could have supported the first consumer’s request, which results in a failure to achieve CAP-compliance by violating the consistency constraint (and likely the availability constraint, as well).

The provisioning will never be “perfect” because there is no exact solution to an NP-complete problem. That means the solution is basically the fastest/best it can be given the constraints. Which we often distill down to “good enough.” That means that there are cases where either availability or consistency will be violated, making cloud in general non-CAP compliant. 

The core conflict is the definition of “highly available” as “working with minimal latency.” Or perhaps the real issue is the definition of “minimal”. For it is certainly the case that a management system that leverages opportunistic locking and shared data systems could alleviate the problem of consistency, but never availability. Eliminating the consistency problem by ensuring that every request has exclusive access to the “database” of instances when searching for an appropriate physical location introduces latency while others wait. This is the “good enough” solution used by CPU schedulers – the CPU scheduler is the one and only authority for CPU time-slice management. It works more than well-enough on a per-machine basis, but this is not scalable and in larger systems would result in essentially higher rates of non-availability as the number of requests grows.

WHY SHOULD YOU CARE

Resource provisioning and job scheduling in general are in the class of NP-complete problems. While the decision problem to choose an appropriate physical server on which to launch a set of requested instances can be considered an instantiation of the Bin packing problem, it can also be viewed as a generalized assignment problem or, depending on the parameters, a variation of the Knapsack problem, or any one of the multiprocessor scheduling problems, all of which are NP-complete. Cloud is essentially the integration of systems that provide resource provisioning and may include job scheduling as a means to automate provisioning and enable a self-service environment. Because of its reliance on problems that are NP-complete, we can deduce that cloud is NP-complete.

NOTE: No, I’m not going to provide a formal proof. I will leave that to someone with a better handle on the reductions necessary to prove (or disprove) that the algorithms driving cloud are either the same or derivations of existing NP-Complete problem sets.

The question “why should I care if these problems are NP-Complete” is asked by just about every student in every algorithms class in every university there is. The answer is always the same: because if you can recognize that a problem you are trying to solve is NP-Complete you will not waste your time trying to solve a problem that thousands of mathematicians and computer scientists have been trying to solve for 50 years and have thus far not been able to do so. And if you do solve it, you might want to consider formalizing it, because you’ve just proved P = NP and there’s a $1,000,000 bounty out on that proof. But generally speaking, it’s a good idea to recognize them when you see them because you can avoid a lot of frustration by accepting up front you can’t solve it, and you can also leverage existing research / algorithms that have been proposed as alternatives (approximation algorithms, heuristics, parameterized algorithms, etc…) to get the “best possible” answer and get on with more important things. 

It also means there is no one optimal solution to “cloud”, only a variety of “good enough” or “approximately optimal” solutions. Neither the time required to provision nor the availability of resources in a public cloud environment can be consistently guaranteed. This is, essentially, why the concept of reserved instances exists. Because if your priorities include high availability, you’d better consider budgeting for reserved instances, which is basically a more cost-effective method of having a whole bunch of physical servers in your pool of available resources on stand-by.

But if your priorities are geared toward pinching of pennies, and availability is lower on your “must have” list of requirements, then reserving instances is an unnecessary cost – as long as you’re willing to accept the possibility of lower availability.

Basically, the impossibility of achieving CAP in cloud impacts (or should impact) your cloud computing strategy – whether you’re implementing locally or leveraging public resources. As I mentioned very recently – cloud is computer science, and if you understand the underlying foundations of the systems driving cloud you will be much better able to make strategic decisions regarding when and what type of cloud is appropriate and for what applications.



Chris Czarnecki posted Virtually Adopting the Cloud to the Learning Tree blog on 9/1/2010:

image Virtualization technology is mainstream for many organisations these days. Not only does it streamline management of IT resources, it also makes more efficient use of existing and new IT infrastructure, improves reliability and availability whilst reducing administration and improving deployment times. Many hosting providers are now offering virtual servers as they adopt this technology to gain the benefits I mention above. As a result they are often able to pass on cost savings to their customers.

Followers of my blogs will recognise that in previous posts I have highlighted some of the above benefits as advantages to be gained from adopting Cloud Computing too. This should come as no surprise, as virtualization is one of the basic building blocks upon which Cloud Computing is built. Whether a public, private or hybrid cloud, and independent of vendor, all make use of virtualization.

In many ways, Cloud Computing can be considered as an extension of virtualization technology, building functionality around and on top to simplify the provision and management of IT, and so much more. On a recent teach of the Learning Tree Cloud Computing course, an attendee suggested their organisation had adopted Cloud Computing because they were using virtualization technology (VMWare in this case). Just using virtualization technology on its own is not cloud Computing. By the end of the course the attendee could see the clear distinction yet close relationship between these technologies. The misunderstanding is very easy to make, and that’s why the Learning Tree course not only examines Cloud Computing and the products from many vendors in detail , it also explains the underlying technologies such as virtualization and Web services so attendees can really place these technologies into the context of Cloud Computing.


Ben Lorica reported Amazon's cloud platform still the largest, but others are closing the gap (based on job openings) to the O’Reilly Radar blog on 8/31/2010:

image Tim's recent tweet on the growing demand for Google App Engine skills inspired me to measure the popularity of the major cloud computing platforms. Elance is one of many job boards in our data warehouse of U.S. job postings (1), and I wanted to measure demand across many more job sites.

image Measured in terms of (U.S.) job postings, Amazon's Cloud Computing platform is still larger than Google's App Engine. What's interesting is that the gap has closed over the past year (2):

[Chart: U.S. job postings mentioning each cloud computing platform over the past year]

Over the past two months, the other cloud platforms were roughly one-third (Google), one-fourth (Microsoft), and one-sixth (Rackspace) the size of Amazon. During the same period last year, these platforms were much smaller: Google was one-fifth, Microsoft was one-seventh, and Rackspace one-tenth the size of Amazon.


(1) Data for this post is for U.S. online job postings through 8/21/2010 and is maintained in partnership with SimplyHired.com. We use algorithms to dedup job posts: a single job posting can contain multiple jobs and appear on multiple job sites.

(2) I counted the number of unique job posts that mention each of the cloud computing platforms.

I’m not convinced that job openings are a harbinger of the comparative uptake of IaaS and PaaS services.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

CSC posted VMworld 2010: Best Practices for the [Private] Enterprise Cloud on 8/31/2010:

image Yesterday morning during the first day of VMworld 2010, our own Dan Gallivan spoke to a completely packed room about the Best Practices for Building an Advanced Operating Model for the Enterprise Cloud. Dan’s presentation was geared to help the attendees understand what was critical in building a private cloud. The key takeaway is the eight main best practices for building an advanced operating model. These eight best practices are:

  1. Service Catalog Management – Catalog the internal & external sourced infrastructure services provided by the IT organization
  2. Service Level Management – All infrastructure services offered are wrapped with a service level management framework that enables the IT organization to track, measure, market, and deliver services that are in line with service targets defined in the Service Level Agreement (SLA)
  3. Service Cost Management – Enables chargeback and/or “showback” of costs of services rendered to internal and external consumers of infrastructure services.
  4. Service Managers – A new position that is responsible for working with the domain stakeholders to define, create, publish, market, and describe the infrastructure services.
  5. Self Service - Empowers infrastructure consumers to use self-service options and automated request fulfillment
  6. Service Request Management - Underlying workflow and processes that enable an infrastructure service request to be reliably submitted, routed, approved, monitored, and delivered
  7. Service Provisioning - Provides automated provisioning of virtual appliances, virtual machines, storage resources, network resources, operating systems, security personas, and applications
  8. Service Life-Cycle Management – Provides post service deployment management that tracks the ownership of the service, records changes to the service, and decommissions the service

To learn more about Dan’s presentation, and CSC’s cloud initiatives, stop by our booth 1014 in Moscone South. Also stop by Moscone North Room 135 at 3pm Wednesday to hear Siki Giunta, Global Vice President, Cloud Computing & Cloud Services, and Peter Allen, President, Global Sales & Marketing, present on Business Transformation - The Silver Lining in the Cloud.


<Return to section navigation list> 

Cloud Security and Governance

The HPC in the Cloud blog announced Key [DMTF] Cloud Workload Portability Specification Becomes National Standard on 9/1/2010:

image Distributed Management Task Force, Inc. (DMTF), the organization bringing the IT industry together to collaborate on systems management standards development, validation, promotion and adoption, today announced that its Open Virtualization Format (OVF) standard version 1.1 has been adopted as an American National Standards Institute (ANSI) International Committee for Information Technology Standards (INCITS) standard. This achievement marks a major milestone in DMTF’s efforts to enable interoperable, platform-independent cloud and virtual management solutions.

image OVF has been designated as ANSI INCITS 469 2010 by the INCITS Executive Board. INCITS is accredited by ANSI, the organization that oversees the development of American National Standards. ANSI accreditation signifies that the procedures used by the standards body in connection with the development of American National Standards meet the Institute’s essential requirements for openness, balance, consensus and due process.

“ANSI adoption of OVF provides additional validation of the importance of this standard for virtualization management,” said Winston Bumpus, DMTF president. “Since its introduction, OVF has achieved wide scale adoption. We are extremely honored to receive this national recognition for our efforts to enable interoperable IT solutions for the virtual data center.”

DMTF will continue to work with INCITS to submit OVF to the International Standards Organization/International Electrotechnical Commission (ISO/IEC) for adoption as an international standard.

First published in March 2009, OVF simplifies interoperability, security and machine lifecycle management by describing an open, secure, portable, efficient and extensible format for the packaging and distribution of workloads consisting of one or more virtual machines and applications. This enables software developers to ship pre-configured, ready-to-deploy solutions and allows end-users to distribute applications into their environments with minimal effort. OVF is the cornerstone of DMTF’s virtualization standards efforts and is also considered an important foundation for the organization’s cloud standards development.

The ANSI INCITS 469 2010 standard can be purchased through the ANSI website at www.ansi.org and INCITS website at www.incits.org.

About DMTF

DMTF enables more effective management of millions of IT systems worldwide by bringing the IT industry together to collaborate on the development, validation and promotion of systems management standards. The group spans the industry with 160 member companies and organizations, and more than 4,000 active participants crossing 43 countries. The DMTF board of directors is led by 15 innovative, industry-leading technology companies. They include Advanced Micro Devices (AMD); Broadcom Corporation; CA, Inc.; Cisco; Citrix Systems, Inc.; Dell; EMC; Fujitsu; HP; Hitachi, Ltd.; IBM; Intel Corporation; Microsoft Corporation; Oracle; and VMware, Inc. With this deep and broad reach, DMTF creates standards that enable interoperable IT management. DMTF management standards are critical to enabling management interoperability among multi-vendor systems, tools and solutions within the enterprise. Information about DMTF technologies and activities can be found at http://www.dmtf.org.


See the Chris Hoff (@Beaker) continued his analysis of VMware’s (New) vShield: The (Almost) Bottom Line on 8/31/2010 item in the Other Cloud Computing Platforms and Services section below.


Rick Vanover asked Are RTOs Forgotten With Data Protection in the Cloud? in an 8/31/2010 post to the HPC in the Cloud blog:

For organizations considering a public-cloud solution, there are a number of underlying technologies that offer use cases to get started. One such area is data protection solutions, which in most situations are based on cloud storage providers such as Amazon S3, Nirvanix Storage Delivery Network, Azure Blob and others. Regardless of the level of interest an organization has in a cloud solution, one of the first steps is a basic cost comparison.

The basic premise of cloud storage is infinite capacity on demand at a periodic cost. For example, Amazon S3 charges $.15 per GB per month, plus transfer charges. The natural response is to come up with the cost model for storage provisioned in a private cloud or traditional storage area network (SAN) on premise. For data protection, this is presumably off-site storage accessible via a WAN or VPN link for traditional “brick and mortar” IT infrastructure.

Cloud storage can be less expensive in a number of situations. The primary favorable cost scenario is comparing against a traditional SAN purchase of storage that will sit idle and empty for a long period of time. The cloud storage option instead would cost only as consumed, yet still offer effectively infinite scalability. For easy calculations, a protected Terabyte would be $1843.20 per year in the Amazon S3 cloud. Internal storage costs can vary widely, but that same $1843.20 could easily purchase a Terabyte of enterprise or midrange storage. The operating costs (power, cooling, space, support, etc.) can vary widely as well.
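As a sanity check on that $1843.20 figure, here is a trivial sketch of the arithmetic (the rate and the 1 TB = 1,024 GB assumption come from the paragraphs above; transfer and request fees are ignored):

Module StorageCostSketch
    Sub Main()
        Const ratePerGBPerMonth As Decimal = 0.15D ' quoted Amazon S3 storage rate
        Const protectedGB As Decimal = 1024D       ' one protected Terabyte
        Dim yearlyCost As Decimal = ratePerGBPerMonth * protectedGB * 12D
        Console.WriteLine("Annual S3 storage cost: {0:C}", yearlyCost) ' $1,843.20 in en-US
    End Sub
End Module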

Regardless of how the cost argument is defended, one area that may be impacted by cloud-based data protection is the recovery time objective, or RTO. The root of this concern is that traditional storage options, even the venerable tape media category, have fast transfer rates. The hard question is: do we put a price on an RTO? Generally speaking, a smaller RTO leads to a higher-cost solution.

The question becomes: if available bandwidth for a cloud-based data protection solution increases the RTO, how does that change the game? Is it even considered? Share your comments on how this topic has been addressed in your organization.


See the Ellen Messmer reported Trend Micro brings encryption to the cloud in an 8/31/2010 NetworkWorld story posted by the San Francisco Chronicle’s SF Gate blog item in the Other Cloud Computing Platforms and Services section below.


<Return to section navigation list> 

Cloud Computing Events

Patrick O’Rourke reported Items of interest from VMworld 2010 to the Microsoft Virtualization Team Blog on 9/1/2010:

Hi - by now you might have read there are 17,000 of us attending VMworld in San Francisco. Huge crowds, just as Rick Vanover predicted. Lots of energy and excitement as you can imagine. This post is designed to bring some of the show to you, assuming you're not attending and queuing up for a session 45 mins before the start.

The expo hall started Monday. The attendees who found us were entertained to see 'the biggest little booth' at VMworld. Here's a view. Mike Neil, our GM of Windows Server and Server Virtualization, filmed this 20-minute video from the VMworld blogger lounge (aka, The Cube). After the expo floor closed, we and Citrix hosted a Tweetup. Great conversations and crowd - not to mention the excellent vanilla bean beer made by Thirsty Bear. The discussions reflected the still maturing adoption of virtualization:

  • a software gent from Fort Collins is attending VMworld to learn more about server consolidation. They have 100+ servers and are running out of room. We discussed what Hyper-V could do for him, and then after a bit of time it was revealed they're 99% SPARC shop with Java apps. We agreed that he's got to tackle the hardware before the Virt, otherwise Oracle OpenWorld is the show to attend ;-)
  • three gents from Rockwell Collins. One gent works desktop support, and the other two gents work the datacenter. We had a good laugh about the power struggle between both camps during VDI projects, but not the case at Rockwell Collins to their credit. The most interesting part of the discussion was the one gent's ability to use session-based access to his email while on the airplane.
  • I spoke with a few folks from Joann Fabrics. They're a big System Center shop, and are using ESXi w/ their Windows apps. I stepped into a funny trap when I told them, "My wife loves Joann Fabrics." I guess they hear that line all the time. Dana and crew were kind enough to stop by the booth yesterday, and hopefully we'll see you at a future MMS.

That evening, the Aug. 31 edition of USA Today started hitting the streets on the East Coast. The front news section included this 'open letter' advertisement to VMware customers from Microsoft's Brad Anderson. It has turned some heads so far. And, of course, VMware had an appropriate response.

Edwin Yuen published a blog worth reading, as it summarized our demos in the booth. Here's an excerpt:

So at this year's VMworld, we are demoing the cloud solution that Outback Steakhouse created using Windows Azure Platform. Working with a partner, Outback Steakhouse developed and deployed an online marketing campaign in less than eight weeks - the flexibility and scalability of the cloud allowed them to support overwhelming customer response. The marketing campaign met its goal of 500,000 fans in only 18 days. It's a great example of IT being able to satisfy business and marketing demands with a fast, cost-effective solution. [Emphasis added.]

We will also demo how we're helping customers use the same tools to control and manage Windows Azure-based applications, as they would applications running on Windows Server. Customers can use System Center Operations Manager to monitor the health of applications, whether the apps are on-premises or on Windows Azure, and in return get a complete view of how well all their IT services are running. We showed this demo at Microsoft Management Summit 2010. This solution provides the critical capability to manage your applications regardless of the infrastructure they may run on, whether it be your datacenter or the public cloud with Windows Azure.

Yesterday's keynote was probably the best of the 7 VMworld conferences that I've attended. It was a combination of game day celebration, painting pictures of the future, and showing what's here or coming. There were several holes in the presentation. Go here to watch/listen to Mike Neil, Simon Crosby (CTO at Citrix) and Harry Labana (CTO at Citrix) comment on the keynote.

One of the more entertaining lines, or at least the takeaway, is when CEO Paul Maritz said the OS is no longer the center of innovation. His point is that the OS isn't going away, but rather the future innovation will be in virtualization, app frameworks and end-user access. This statement supports his company's lofty P/E ratio and investments in future revenue streams such as SpringSource, vBlock, View 4.5. Thankfully, we offer all that and more today:

OS (Windows Server Hyper-V), app framework (.NET), cloud-scale OS (Windows Azure), common identity and management (AD, System Center), desktop optimization (App-V, RDS, RemoteFX).

The meetings yesterday convinced me that Windows Azure [is] much different than EC2 (off-premises IaaS) and VMware's vCloud (private cloud, IaaS), but there's little understanding of what it can do for people today. Coca-Cola, The Tribune Company, RiskMetrics, and Outback Steakhouse are examples that help people understand. [Emphasis added.]

I hope you found this recap useful.

Patrick


Joe Panettieri reported on 8/1/2010 Microsoft Cloud Channel Chief [Gretchen O’Hara] Set to Address MSPs at the N-able Partner Summit of 10/20 through 10/22/2010 in Scottsdale, AZ:

Jon Roskill is Microsoft’s global channel chief. But check the Microsoft organization chart, and you’ll discover a cloud channel chief working closely with Roskill. Her name is Gretchen O’Hara, and she’s set to address a few hundred managed services providers (MSPs) during the N-able Partner Summit in October. What message will O’Hara share with MSPs? Here are some potential clues.

By now, most readers know about Microsoft’s All In channel cloud strategy. At first glance, the strategy is new — announced shortly before the Microsoft Worldwide Partner Conference (WPC10) in July 2010. But O’Hara has spent considerable time working on Microsoft’s channel cloud strategy. My best estimate: She’s been focused on channel-related partner strategies since around 2007. Before that, she headed Microsoft’s competitive channel strategy vs. Google, Linux and Open Source from around 2005 to 2007, according to her LinkedIn Bio.

image During WPC10, O’Hara said Microsoft has about 16,000 SaaS and cloud partners leveraging BPOS (Business Productivity Online Suite). But when pressed a bit further, she conceded that only about 8,000 of those BPOS partners have two or more customer engagements under their belts. As most readers will recall, BPOS includes Exchange Online, SharePoint Online and other Microsoft-developed SaaS applications.

Key Questions

I wonder:

  • How has BPOS performed since WPC10? Quite a few partners are waiting on Microsoft to deliver Exchange 2010 and SharePoint 2010 via BPOS; right now, BPOS offers older versions of Exchange and SharePoint.
  • Has Microsoft considered any new pricing models and/or partner control models, potentially allowing MSPs and VARs to handle BPOS billing to end-customers? I suspect not, since such flexibility is currently reserved for very large service providers that add value to BPOS.
  • Also, what types of incentives has Microsoft introduced to get more partners to promote BPOS? During the recent XChange conference in Dallas, Microsoft apparently promoted new cloud incentives to motivate partners. What’s been the initial reaction to those incentives?
  • My biggest question of all: Can MSPs really, truly profit from BPOS applications? Or will the best Microsoft partners develop application-level expertise… and focus instead on the Windows Azure platform? [Emphasis added.]

I’ve got plenty more questions. I hope to pose a few to them to O’Hara during the N-able Partner Summit (Oct. 20-22, Scottsdale, Ariz.).

Please note: Speaker lineups for conferences change all the time. As a veteran of the conference business, I know Microsoft’s executives sometimes run into 11th hour scheduling conflicts. So, I’m not guaranteeing that O’Hara will be at the N-able event. But to the best of my knowledge, Microsoft’s PR team has confirmed O’Hara’s intention to participate. …

Read More About This Topic


The Cloud Tweaks blog claimed The Business Cloud Summit promises to be the UK’s premier Cloud event of 2010 on 9/1/2010:

The Business Cloud Summit 2010 – 30 November, London, England

The Business Cloud Summit 2010 will be Europe’s Cloud Computing event of the year. Unfolding over one day, it will comprise two highly focused streams, exploring current and future Cloud Computing issues in both the public and private sectors. The agenda will build on the success of the 2009 Summit, delivering a unique mix of focus and leading industry insight, and ensuring that the 2010 Summit will be marked in the diaries of CIOs, CEOs and COOs from across the UK and Europe.

image With dedicated content streams covering the key issues in both the public and private sectors, this one-day event will include top-level insight, relevant to all forward-thinking technology professionals, across all industries and all sectors; drilling down into the Cloud issues that affect central and local government, the NHS, education and the third sector.

The Business Cloud Summit is the only UK event of its type to offer specific content aimed at line of business managers in HR, finance, CRM and IT. It’s the only place where professionals from all areas of the Cloud industry will be brought together under one roof; infrastructure providers, buyers, end-users, influencers and decision makers.

The Cloud for 2010

According to IDC, 2009 was the year that Cloud Computing was ‘seeded’. In 2010 Cloud computing is now part of the mainstream. End users are embracing the cost and productivity benefits of the model with enthusiasm. At a time when the world is still emerging carefully from the worst economic downturn in living memory, lower start-up costs and total cost of ownership of Cloud Computing, delivering ROI of over 1000% in some cases, are welcomed with enthusiasm by CIOs, CEOs and CFOs in organisations across every business sector.

2010 is the year of Cloud adoption:
  • By 2012, a fifth of all businesses will own no IT assets – Gartner
  • The Cloud services market will surge to around $150bn in 2013 – Gartner
  • The market for cloud services will account for 10% of all IT spending by 2013 – IDC

For more information see: http://www.businesscloud9.com/summit/2010


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Gary Orenstein explained How VMware Plans to Control the Cloud in this 9/1/2010 post to GigaOm’s Structure blog:

image One area where VMware did not disappoint this week is breadth of vision. In just a handful of years, the company has gone from the defacto hypervisor provider to an all-encompassing software infrastructure vendor for virtualization and cloud computing. Even for someone watching the industry, the volume of announcements can be overwhelming.

VMware breaks down the product set into three layers: infrastructure, application platforms and end-user computing. With an eye on understanding specific products, here’s what I saw this week:

image Cloud Infrastructure and Management. VMware’s mainstay products – vSphere and vCenter — received plenty of attention at the show, specifically in the hands-on labs. However, what really galvanized the conference was the official release of vCloud. VMware’s entry into providing a complete cloud stack builds on the familiarity corporate data centers and service providers have with the vSphere data center virtualization platform and vCenter management products. This may give VMware a leg up in getting customers to adopt cloud architectures given the familiarity companies have with the underlying products. vCloud provides user portals, catalogs of common virtual machines, and security services for corporations and service providers so they can deliver self-service computing.

In conjunction with the official vCloud release, VMware also announced vCloud Datacenter Services, where service providers such as Verizon can deliver the same vCloud environment to corporations. The VMware strategy to empower both the enterprise and service providers with common infrastructure certainly makes sense and could help kick-start mass enterprise cloud adoption.

Also in the infrastructure category is vShield, a suite of virtualization security products that help deliver network services, such as firewalls, within vCloud environments.

Cloud Application Platforms. VMware boosted their Platform-as-a-Service offerings with vFabric, a cloud application platform that leverages the Spring Java development framework and the talent from the SpringSource acquisition. Other tools that have been integrated include the distributed data management layer GemFire from the Gemstone acquisition and the application messaging communications queue acquired with RabbitMQ. VMware has emphasized application portability at this tier and set up partnerships with Salesforce.com through VMforce and Google through AppEngine to allow enterprises to move applications across these platforms.

End-User Computing. VMware also announced a new version of VMware View intended to help manage end-user desktops across a proliferation of devices. VMware View is a big piece of VMware’s attempt to own more of the virtual desktop infrastructure (VDI) arena, an area we profiled in Virtual Desktops are Hot Again. Another piece of the end-user pie is ThinApp, a technology VMware acquired from ThinStall to help streamline application delivery.

The challenge for VMware now might be how to communicate the breadth and depth of their product portfolio in a way that’s easy to digest and understand. VMware certainly has the arsenal to blanket both the enterprise and service provider segments with comprehensive offerings, but it’s now a much more complex sell than the simple message of server consolidation through hypervisors and virtual machines.

Next year, I anticipate that we’ll see the fruits of VMware’s labors, with enterprise and service provider adopters sharing their experience with the overall community. That will be the ultimate proof of the next stage of VMware’s potential success.

Gary Orenstein is Host of The Cloud Computing Show.

Ellen Messmer reported Trend Micro brings encryption to the cloud in an 8/31/2010 NetworkWorld story posted by the San Francisco Chronicle’s SF Gate blog:

image Trend Micro is blazing a new trail with a service called SecureCloud intended to give enterprises a way to encrypt data in cloud-computing environments.

image SecureCloud allows you to maintain control over the encryption key used to secure data stored in the Amazon EC2, Eucalyptus or VMware vCloud cloud infrastructures. Other cloud-computing variants could be added in the future.

image"IT operations may be firing up [a remote virtual machine] image but we have security validating the integrity, and it's encrypted until it hits the cloud, and it's encrypting data at rest," according to Todd Thiemann, senior director of data center security and marketing at Trend Micro.

He notes that SecureCloud allows the IT department using either public or private cloud-computing services to answer the basic questions, "Is this image OK? And is it mine?"

Now in beta with general availability expected by year end, SecureCloud is provided through a Web site portal and makes use of policy-based encryption to allow access to a virtual-machine image as well as storing related activity logs.

In addition to offering the security service, Trend Micro is looking at making comparable software available to companies for on-premises use.

In a separate announcement, Trend Micro also unveiled an antimalware protection module for its VMware server security software, Deep Security 7.5. It includes integrity monitoring, log inspection and stateful firewall capabilities, and leverages the most recent VMware vShield Endpoint APIs. Trend Micro Deep Security 7.5 is expected to ship in October.

Read more about data center in Network World's Data Center section.

I’ve been agitating for Transparent Data Encryption (TDE) for SQL Azure and a feature similar to this for Windows Azure tables and blobs since Microsoft introduced these services.


Chris Hoff (@Beaker) continued his analysis of VMware’s (New) vShield: The (Almost) Bottom Line on 8/31/2010:

image After my initial post yesterday (How To Wield the New vShield (Edge, App & Endpoint)) remarking on the general sessions I sat through on vShield, I thought I’d add some additional color given my hands-on experience in the labs today.

I will reserve more extensive technical analysis of vShield Edge and App (I didn’t get to play with Endpoint as there is not a lab for that) until I spend some additional quality time with the products as they emerge.

Because people always desire for me to pop out of the cake quickly, here you go:

You should walk away from this post understanding that I think the approach holds promise within the scope of what VMware is trying to deliver.  I think it can and will offer customers choice and flexibility in their security architecture and I think it addresses some serious segmentation, security and compliance gaps.  It is a dramatically impactful set of solutions that is disruptive to the security and networking ecosystem.  It should drive some interesting change.  The proof, as they say, will be in the vPudding.

Let me first say that from VMware’s perspective I think vShield “2.0″ (which logically represents many technologies and adjusted roadmaps both old and new) is clearly an important and integral part of both vSphere and vCloud Director’s future implementation strategies.  It’s clear that VMware took a good, hard look at their security solution strategy and made some important and strategically-differentiated investments in this regard.

All things told, I think it’s a very good strategy for them and ultimately their customers.  However, there will be some very interesting side-effects from these new features.

vShield Edge is as disruptive to the networking space (it provides L3+ networking, VPN, DHCP and NAT capabilities at the vDC edge) as it is to the security arena.  When coupled with vShield App (and ultimately endpoint) you can expect VMware’s aggressive activity in retooling their offers here to cause further hastened organic development,  investment, and consolidation via M&A in the security space as other vendors seek to play and complement the reabsorption of critical security capabilities back into the platform itself.

Now all of the goodness that this renewed security strategy brings also has some warts. I’ll get into some of them as I gain more hands-on experience and get some questions answered, but here’s the Cliff Note version with THREE really important points:

1. The vShield suite is the more refined/retooled/repaired approach toward what VMware promised to deliver three years ago, when I wrote about it in 2007 (Opening VMM/HyperVisors to Third Parties via API’s – Goodness or the Apocalypse?), later in 2008 (VMware’s VMsafe: The Good, the Bad, and the Bubbly…) and, lest we forget, in 2009 (The Cart Before the Virtual Horse: VMware’s vShield/Zones vs. VMsafe API’s).

Specifically, as the virtualization platform has matured, so has the company’s realization that security is something they are going to have to take seriously and productize themselves, because depending upon an ecosystem wasn’t working — mostly because doing so meant that the ecosystem had to uproot entire product roadmaps to deliver solutions, and it became a game of “supply vs. demand chicken.”

However, much of this new capability isn’t fully baked yet, especially from the perspective of integration, usability and even feature-set capabilities such as IPv6 support. Endpoint is basically the more streamlined application of APIs and libraries for anti-malware offloading, relieving third-party ISVs from having to write fastpath drivers that sit in the kernel/VMM and disrupt their roadmaps. vShield App is the Zones solution, polished to provide inter-VM firewalling capabilities.

Edge is really the new piece here, representing a new function that provides perimeterized security capabilities at the vDC edge. Many of these features are billed — quite openly — as relieving a customer from needing to use/deploy physical networking or security products.  In fact, in some cases even virtual networking products such as the Cisco Nexus 1000v are not usable/supportable.  This is an example of a reasonably closed, software-driven world of Cloud where the underlying infrastructure below the hypervisor doesn’t matter…until it does.

2. vShield Edge and App are, in the way they are currently configured and managed, very complex and unwieldy, and the performance, resiliency and scale described in some of the sessions are as yet unproven and in some cases represent serious architectural deficiencies at first blush.  There are some nasty single points of failure in the engineering (as described), and it’s unclear how many reference architectures for large enterprise and service-provider-scale Cloud use have really been thought through given some of these issues.

As an example, only being able to instantiate a single (but required) vShield App virtual appliance per ESX host brings into focus serious scale, security-architecture and resilience issues, while being able to deploy numerous Edge appliances brings into focus manageability and policy-sprawl concerns.  There are so many knobs and levers leveraged across the stack that it’s going to be very difficult in large environments to reconcile policy spread over the three components (I only interacted with two), and that says nothing about then integrating/interoperating with third-party vSwitches, physical switches, and virtual and physical security appliances.  If you think it was challenging before, you ain’t seen nothin’ yet.

3. The current deployment methodology reignites the battle that started to rage when security teams lost visibility into the security and networking layers and the virtual administrators controlled the infrastructure from the pNIC up.  It takes the gap-filler virtual security solutions from small third parties such as Altor, which played nicely with vCenter but still allowed the security teams to manage policy, and blows that model up.  Now security enforcement is a commodity feature delivered via the virtualization platform, but it requires too complex a set of knowledge and expertise of the underlying virtualization platform to be used effectively by role-driven security teams.

While I’ll cover items #1 and #2 in a follow-on post, here’s what VMware can do in the short term to remedy what I think is a huge issue going forward with item #3: usability and management.

Specifically, in the same way vCloud Director sits above vCenter and abstracts away much of the “unnecessary internals” to present a simplified service catalog of resources/services to a consumer, VMware needs to provide a dedicated security administrator’s “portal” or management plane which unites the creation, management and deployment of policy from a SECURITY perspective of the various disparate functions offered by vShield App, Edge and Endpoint.

If you expect a security administrator to have the in-depth knowledge of how to administer the entire (complex) virtualization platform in order to manage security, this model will break and cause tremendous friction.  A security administrator shouldn’t have access to vCenter directly or even the vCloud Director interfaces.

Since much of the capability for automation and configuration is made available via API, the notion of building a purposed security interface to do so shouldn’t be that big of a deal.  Some people might say that VMware should focus on building API capabilities and allow the ecosystem to fill the void with solutions that take advantage of the interfaces.  The problem is that this strategy has not produced solutions that have enjoyed traction today and it’s quite clear that VMware is interested in controlling their own destiny in terms of Edge and App while allowing the rest of the world to play with Endpoint.

I’m sure I’m missing things, and given the exposure I’ve had (without any in-depth briefings) there may be material issues associated with how early these products are, but I think it important to get these thoughts out of my head so I can chart their accuracy, and it gives me a good reference point to direct the product managers to when they want to scalp me for heresy.

There’s an enormous amount of detail that I want to/can get into.  The last time I did that it ended up in a 150-slide presentation I delivered at Black Hat…

Allow me to reiterate what I said in the beginning:

You should walk away from this post understanding that I think the approach holds promise within the scope of what VMware is trying to deliver.  I think it can and will offer customers choice and flexibility in their security architecture and I think it addresses some serious segmentation, security and compliance gaps.  It is a dramatically impactful set of solutions that is disruptive to the security and networking ecosystem.  It should drive some interesting change.  The proof, as they say, will be in the vPudding.

…and we all love vPudding.


Jeff Barr announced a New CloudFront Feature: Invalidation on 8/31/2010:

Under normal conditions, an Amazon S3 object in a bucket that is part of a CloudFront distribution can be cached at a CloudFront edge location per the object's TTL (Time to Live). In many situations it is possible to come up with a reasonable value for the TTL ahead of time. In other cases you may want the benefits of CloudFront's caching but you may also need to make changes to the S3 object at unpredictable times.
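The TTL comes from the Cache-Control (or Expires) header stored with the S3 object, so coming up with a value ahead of time amounts to setting that header at upload. Here is a minimal sketch using the boto library; the bucket, key, credentials and 24-hour value are placeholders chosen for illustration:

# Sketch: store a Cache-Control header with the S3 object at upload time so
# CloudFront edges cache it for 24 hours. Bucket, key and credentials are
# placeholders; requires the boto library.
from boto.s3.connection import S3Connection

conn = S3Connection("YOUR_AWS_ACCESS_KEY_ID", "YOUR_AWS_SECRET_ACCESS_KEY")
bucket = conn.get_bucket("my-cloudfront-origin-bucket")
key = bucket.new_key("css/site.css")
key.set_contents_from_filename(
    "site.css",
    headers={"Cache-Control": "max-age=86400"},  # 24-hour TTL at the edge
    policy="public-read",                        # let CloudFront fetch the object
)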

We've just added a new invalidation function to the CloudFront API. You can now POST a list of one or more objects to a CloudFront distribution and the objects will be removed from all of the edge locations within minutes. The invalidation happens in an asynchronous fashion and you can have several invalidation requests pending at the same time.
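For a sense of what that POST looks like on the wire, here is a minimal Python sketch against the 2010-08-01 CloudFront API. The distribution ID, credentials and paths are placeholders, and the flat InvalidationBatch XML shape reflects my reading of the current API documentation rather than tested code:

# Sketch: submit a CloudFront invalidation batch using only the Python
# standard library. Distribution ID, credentials and paths are placeholders.
import base64, hashlib, hmac
from email.utils import formatdate
from http.client import HTTPSConnection

ACCESS_KEY = "YOUR_AWS_ACCESS_KEY_ID"        # placeholder
SECRET_KEY = b"YOUR_AWS_SECRET_ACCESS_KEY"   # placeholder
DISTRIBUTION_ID = "E123EXAMPLE"              # placeholder

def invalidate(paths, caller_reference):
    # The CloudFront REST API authenticates by signing the Date header value
    # with HMAC-SHA1 of your secret key.
    date = formatdate(usegmt=True)
    signature = base64.b64encode(
        hmac.new(SECRET_KEY, date.encode("utf-8"), hashlib.sha1).digest()
    ).decode("ascii")
    body = "<InvalidationBatch>"
    body += "".join("<Path>%s</Path>" % p for p in paths)
    body += "<CallerReference>%s</CallerReference></InvalidationBatch>" % caller_reference
    conn = HTTPSConnection("cloudfront.amazonaws.com")
    conn.request(
        "POST",
        "/2010-08-01/distribution/%s/invalidation" % DISTRIBUTION_ID,
        body,
        {"Date": date,
         "Authorization": "AWS %s:%s" % (ACCESS_KEY, signature),
         "Content-Type": "text/xml"},
    )
    response = conn.getresponse()
    # A 201 Created response returns an <Invalidation> document whose Status
    # starts as InProgress; GET the same resource later to track completion.
    return response.status, response.read()

print(invalidate(["/css/site.css", "/js/app.js"], "batch-2010-09-01-01"))

The asynchronous nature of the call shows up in that InProgress status: the request returns immediately, while the edge locations drop the objects over the following minutes.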

You can use this new feature in many different ways. Here are some ideas:

  1. Update a CSS style sheet or some JavaScript that changes very infrequently.
  2. Remove a video that was not properly encoded.
  3. Remove information (e.g. a news story) that is inaccurate or no longer relevant.
  4. Remove information that is the subject of a DMCA takedown notice.

There are no charges for the first 1000 invalidations per month. After that, each one will cost you $0.005 (one half of one cent).

You can still use the TTL feature and versioned URLs; both techniques are preferred when you can control or predict an object's hold time, with new objects replacing the old on a regular cycle or as part of a planned release. There's no additional cost for either one, and there's no need to wait for an invalidation to take effect (typically 10 to 15 minutes). Invalidation is appropriate when objects can change with little or no notice and the hold time is unpredictable.
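The versioned-URL technique needs no API support at all: embed a version token in the object key so each release is a brand-new URL and the stale copy simply ages out at the edge. A hypothetical helper sketch follows; the distribution domain and naming convention are made up for illustration:

# Hypothetical helper: build a content-hashed, versioned CloudFront URL.
# The distribution domain and naming convention are illustrative only.
import hashlib

CLOUDFRONT_DOMAIN = "d111111abcdef8.cloudfront.net"  # placeholder

def versioned_url(local_path, logical_name):
    with open(local_path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()[:8]
    stem, dot, ext = logical_name.rpartition(".")
    # e.g. css/site.css -> css/site.3f2a9c1b.css; upload the object under this
    # key and reference it from your pages, and no invalidation is ever needed.
    key = "%s.%s.%s" % (stem, digest, ext) if dot else "%s.%s" % (logical_name, digest)
    return "http://%s/%s" % (CLOUDFRONT_DOMAIN, key)

print(versioned_url("site.css", "css/site.css"))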

The following third-party products already include support for this new feature:

Let me know if your product supports it, and I'll amend this blog post to include it. Leave a comment or email me at awseditor@amazon.com.

The AWS Simple Monthly Calculator now supports CloudFront Invalidations and RDS Reserved DB Instances.


<Return to section navigation list> 
