Friday, October 08, 2010

Windows Azure and Cloud Computing Posts for 10/8/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available as HTTP downloads at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

My Vote for More Secure ADO 2.x Connections to SQL Azure in Access 2010 post of 10/8/2010 reported:

Patrick Wood wrote: 

Let's Vote to get Microsoft Access ADO 2.x Connections to SQL Azure on the SQL Azure Feature Voting Forum http://tinyurl.com/2e3m978! ODBC linked tables are a security risk and we need ADO features!

This will enable more secure connections to SQL Azure and provide additional functionality. ODBC linked tables and queries expose your entire connection string including your server, username, and password.

I agreed and added three votes.

You can add your vote(s) for this feature by clicking here.

Note: Microsoft Access 2010 In Depth’s Chapter 28, “Upsizing Access Applications to Access Data Projects and SQL Azure,” includes step-by-step instructions for linking to SQL Azure databases with ODBC.


George Huey upgraded his SQL Azure Migration Wizard v3.4 on 10/7/2010:

The upgrade doesn't affect my Using the SQL Azure Migration Wizard v3.3.3 with the AdventureWorksLT2008R2 Sample Database post significantly.


David Ramel's Using WebMatrix with PHP, OData, SQL Azure, Entity Framework and More post of 9/30/2010 to Visual Studio Magazine's Data Driver column describes integrating recently released products and features, and provides live examples:

I've written before about Microsoft's overtures to the PHP community, with the August release of PHP Drivers for SQL Server being the latest step in an ongoing effort to provide interoperability between PHP and Microsoft technologies.

With a slew of other new products and services released (relatively) recently, such as SQL Azure, OData and WebMatrix, I decided to see if they all work together.

Amazingly, they do. Well, amazing that I got them to work, anyway. And as I've said before, if I can do it, anyone can do it. But that's the whole point: WebMatrix is targeted at noobs, and I wanted to see if a hobbyist could use it in conjunction with some other new stuff.

WebMatrix is a lightweight stack or tool that comes with stripped-down IIS and SQL Server components, packaged together so pros can do quick deployments and beginners can get started in Web development.

WebMatrix is all over PHP. It provides PHP code syntax highlighting (though not IntelliSense). It even includes a Web Gallery that lets you install popular PHP apps such as WordPress (see Ruslan Yakushev's tutorial). Doing so installs and configures PHP and the MySQL database.

I chose to install PHP myself and configure it for use in WebMatrix (see Brian Swan's tutorial).

After doing that, I tested the new PHP Drivers for SQL Server. The August release added PDO support, which allows object-oriented programming.

I used the PDO driver to access the AdventureWorksLTAZ2008R2 test database I have hosted in SQL Azure. After creating a PDO connection object, I could query the database and loop over the results in different ways, such as with the PDO FETCH_ASSOC constant:

while ($row = $sqlquery->fetch(PDO::FETCH_ASSOC))

or with just the connection object:

foreach ($conn->query($sqlquery) as $row)

which both basically return the same results. You can see those results as a raw array on a site I created and deployed on one of the WebMatrix hosting partners, Applied Innovations, which is offering its services for free throughout the WebMatrix beta, along with several other providers. (By the way, Applied Innovations provided great service when I ran into a problem of my own making, even though the account was free.)

Having tested successfully with a regular SQL Azure database, I tried using PHP to access the same database enabled as an OData feed, courtesy of SQL Azure Labs. That eventually worked, but was a bit problematic in that this was my first exposure to PHP and I haven't worked that much with OData's Atom-based XML feed that contains namespaces, which greatly complicated things.

It was simple enough to grab the OData feed ($xml = file_get_contents ("ODataURL"), for one example), but to get down to the Customer record details buried in namespaces took a lot of investigation and resulted in ridiculous code such as this:

echo $xmlfile->entry[(int)$customerid]->children('http://www.w3.org/2005/Atom')
    ->content->children('http://schemas.microsoft.com/ado/2007/08/dataservices/metadata')
    ->children('http://schemas.microsoft.com/ado/2007/08/dataservices')
    ->CompanyName->asXML();

just to display the customer's company name. I later found that registering an XPath Namespace could greatly reduce that monstrous line, but did require a couple more lines of code. There's probably a better way to do it, though, if someone in the know would care to drop me a line (see below).

Anyway, the ridiculous code worked, as you can see here.

I also checked out the OData SDK for PHP. It generates proxy classes that you can use to work with OData feeds. It worked fine on my localhost Web server, but I couldn't get it to work on my hosted site. Microsoft Developer Evangelist Jim O'Neil, who used the SDK to access the "Dallas" OData feed repository, suggested I needed "admin privileges to configure the php.ini file to add the OData directory to the include variable and configure the required extensions" on the remote host. I'm sure that could be done easily enough, but I didn't want to bother the Applied Innovations people any further about my free account.

So I accessed OData from a WebMatrix project in two different ways. But that was using PHP files. At first, I couldn't figure out how to easily display the OData feed in a regular WebMatrix (.cshtml) page. I guess I could've written a bunch of C# code to do it, but WebMatrix is supposed to shield you from having to do that. Which it does, in fact, with the OData Helper, one of several "helpers" for tasks such as using Windows Azure Storage or displaying a Twitter feed (you can see an example of the latter on my project site). You can find more helpers online.

The OData Helper made it trivial to grab the feed and display it in a WebMatrix "WebGrid" using the Razor syntax:

@using Microsoft.Samples.WebPages.Helpers
@{
var odatafeed = OData.Get("[feedurl]");
var grid = new WebGrid(odatafeed, columnNames : new []{"CustomerID",
"CompanyName", "FirstName", "LastName"});
}
@grid.GetHtml();

which results in this display.

Of course, using the built-in SQL Server Compact to access an embedded database was trivial:

@{
var db = Database.OpenFile("MyDatabase.sdf");
var query = "SELECT * FROM Products ORDER BY Name";
var result = db.Query(query);
var grid = new WebGrid(result);
}
@grid.GetHtml();

which results in this.

Finally, I tackled the Entity Framework, to see if I could do a regular old query on the SQL Azure database. I clicked the button to launch the WebMatrix project in Visual Studio and built an Entity Data Model that I used to successfully query the database with both the ObjectQuery and IQueryable approaches. Both of the queries' code and resulting display can be seen here.
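As a rough illustration of those two query styles, here is a hedged C# sketch; the AdventureWorksLTAZ2008R2Entities context and Customer entity names are assumptions based on the AdventureWorksLT sample model, not Ramel's actual code:

using System;
using System.Data.Objects;
using System.Linq;

public static class EntityFrameworkQuerySketch
{
    public static void Run()
    {
        // Assumes an Entity Data Model generated from the SQL Azure database.
        using (var context = new AdventureWorksLTAZ2008R2Entities())
        {
            // ObjectQuery approach: Entity SQL executed through the ObjectContext.
            var objectQuery = new ObjectQuery<Customer>(
                "SELECT VALUE c FROM AdventureWorksLTAZ2008R2Entities.Customers AS c",
                context);

            // IQueryable approach: LINQ to Entities over the same entity set.
            IQueryable<Customer> linqQuery =
                from c in context.Customers
                where c.CompanyName.StartsWith("A")
                select c;

            Console.WriteLine("ObjectQuery rows: {0}", objectQuery.Count());
            Console.WriteLine("IQueryable rows: {0}", linqQuery.Count());
        }
    }
}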

Basically everything I tried to do actually worked. You can access all the example results from my Applied Innovations guest site. By the way, it was fairly simple to deploy my project to the Applied Innovations hosting site. That kind of thing is usually complicated and frustrating for me, but the Web Deploy feature of WebMatrix really streamlined the process.

Of course, WebMatrix is in beta, and the Web Deploy quit working for me after a while. I just started FTPing all my changed files, which worked fine.

I encountered other little glitches along the way, but no showstoppers. For example, the WebMatrix default file type, .cshtml, stopped showing up as an option when creating new files. In fact, the options available for me seemed to differ greatly from those I found in various tutorials. That's quite a minor problem, but I noticed it.

As befits a beta, there are a lot of known issues, which you can see here.

Overall, (much like LightSwitch) I was impressed with WebMatrix. It allows easy ASP.NET Web development and deployment for beginners and accommodates further exploration by those with a smattering of experience.

We'd love to hear your thoughts about WebMatrix, Microsoft's embrace of PHP, or (kind of) new stuff like OData and SQL Azure in general. Comment below or drop me a line.



<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Vittorio Bertocci (@vibronet) announced his New Online Demo: Introducing FabrikamShipping SaaS on 10/7/2010:

Last June, the beginning of the fiscal year for Microsoft, I was asked to add an additional focus to my identity-for-developers mission: helping ISVs take advantage of the Windows Azure Platform for writing Software as a Service solutions.

Link to 00:30:35 Channel9 video

Rather than devising abstract guidance, we decided to tackle the challenge by walking a mile in YOUR shoes. We picked an existing demo application which was originally designed for a single tenant. We chose FabrikamShipping, which started as an identity demo and later became a showcase for the .NET platform at Bob's keynote at last TechEd. Together with our good friends at Southworks we worked on bringing the application to the cloud, tackling the challenges that each of you has to cope with when moving toward a subscription-based model: onboarding customers, preserving existing IP, adapting the offering features to different customer types, handling identity and access both from business and web users (surprised? ;-)), user account activation, managing notifications, collecting payments and in general handling billing relationships, balancing the tradeoffs between isolated resources and multi-tenancy, exposing and securing OData services… the list goes on, but you get the picture. On top of that, we had the further complication derived from the guidance component of the project: not only did we have to solve the problem, we also had to walk a fine line between having something working and something that developers could easily read and understand.


Well, today I am proud to finally unveil the first release of our new sample: meet FabrikamShipping SaaS.

FabrikamShipping SaaS is a complete subscription-based solution running on the Windows Azure platform and publicly available through www.fabrikamshipping.com. It offers a web-based customer onboarding UI, which anybody can use for creating test subscriptions and obtaining their very own application instance, dynamically provisioned by a simple but powerful provisioning engine. Thanks to our partnership with our evangelist friends at PayPal, it even demonstrates how to integrate payments and billing via an external provider!

Together with the live web site, we offer a package with the full source code of the solution (which runs in the DevFabric!) and a companion package which can help you to experience those parts of the demo that require some code on the client, such as the OData services secured via OAuth2 (we give you client code for that) and the enterprise edition, which requires an on-premises identity provider (SelfSTS to the rescue).


I have recorded a brief video in which I give an overview of the FabrikamShipping SaaS architecture, which you can find here. In the next few hours I will also upload walkthrough videos demonstrating the ways in which you can interact with the demo; however, some of them are so simple (and verbosely hand-holding) that I am sure you'll just go through them without the need for any guidance. Here's a list of things to do.

Create a new Small Business subscription

The demo offers 2 subscription levels: enterprise and small business. In order to create a Small Business subscription all you need is a browser and a LiveID (and/or Google) account. That’s what I would suggest to start with.

Use the Companion to explore AdventureWorks

The enterprise subscription offers more advanced features, such as exclusive use of resources (as opposed to the shared resources regimen in multitenant systems: watch the video for more details) and single sign on with on-premises identity providers. As a result, creating and consuming an enterprise edition subscription has some requirements on the client.

In order to help you experience an enterprise subscription we pre-provisioned a tenant, AdventureWorks, and stored in the companion package everything necessary for consuming the application instance and calling its OData services. Download the companion, read the (brief) instructions and try the thrill of being an AdventureWorks employee for a day!

[limited availability] Use the Companion to create a new Enterprise Subscription

You can also use the companion for creating a new enterprise-level subscription: in fact, the onboarding experience is IMO very interesting. If you have an ADFS2 instance available, you can even do that without the need for the companion package. However, be warned. Creating an enterprise subscription takes resources and requires manual steps from us (i.e., creating a new hosted service and storage account), which means that 1) a long time may pass between the moment you submit the request and the moment the instance is ready, especially if you do it during the weekend or when it's nighttime here, and 2) we may not process your request if we have already exhausted the quota for the day. However, that should not be a big problem, because not only can you see how an enterprise subscription works by looking at AdventureWorks, but especially because you can…

…Explore the solution in the Source Code package

Ta dah! The entire solution source code is available for you to slice and dice. Download the source code package, run the setup, open the FabrikamShipping solution, hit F5… and you’ll get an enterprise instance, a small business instance and a subscription console instance running in the devfabric. The provisioning worker process does not work in that cloud-less environment, but all the code is there for you to uncover its secrets. In fact, there’s quite a lot of interesting stuff in there! Below there’s a non-exhaustive list of things you can expect to see in action from the FabrikamShipping SaaS solution:

  • A reusable pattern for building subscription based apps on the Windows Azure platform
  • General-purpose onboarding UI engine
  • Full isolation and multi-tenant application deployment styles
  • Integration with PayPal Adaptive Payment APIs for one-time and preapproved continuous payments
  • How to run complex processes in Worker Roles, including compensation logic and handling external events
  • Message-activated task execution
  • Handling notifications
  • Automated provisioning
  • Email notifications
  • Dynamic Source Code customization and creation of Windows Azure packages via CSPACK from a worker role
  • Creation of SQL Azure databases from a worker role
  • Self-service authorization settings
  • Using the Access Control Service (ACS) Management API for automating access and relationship handling
  • A fully functional user onboarding system from multiple web identity providers, including account activation via automated mails
  • Multi-tenant MVC application authentication based on Windows Identity Foundation (WIF)
  • Securing OData services with ACS and full WIF integration
  • ...

Not bad, I would say :-) but you be the judge, of course.

Now, as it is in the style of this blog, here comes the disclaimer.

Remember, this is just a demo. We played with this baby quite a bit before releasing it and we are happy with it, but it has no SLA whatsoever. It can go down at any moment. It can occasionally break. It is based on pre-release code, itself without ANY SLA, that from time to time WILL fail. Also, don’t imagine that there’s an operation team maintaining this thing: we are a bunch of dudes with day jobs, that from time to time will look at the administrative side of the solution and smooth things out but that for most of the time will be working on something else (like PDC, TechEd Europe, the next bunch of hands-on labs, more samples… etc). If you find the demo down, don’t assume that there’s anything wrong with Windows Azure, it’s practically certain it will be the demo itself.

Also: this is a demo, not a best practice. This is the way in which we solved some of the problems we faced, but it’s not always necessarily the best way. Some of those things are very, very new and reality is that multiple people will have to go through that before a best practice will emerge.

In fact, I want to take this chance to thank the Southworks crew (especially Matias, Sebastian, Lito, PC) who tirelessly worked with me on this. Without their special mix of talent and skin in the game, mixed with years of experience working together and infinite patience for handling my (I'm told :-)) "excessive attention to detail", I don't think we would have pulled this off. Thank you guys!

That said. We are really excited to finally give you the chance to play with the demo. Building this demo on top of the PaaS offered by the Windows Azure Platform gave us the chance to experience firsthand how the right foundation can greatly simplify the development of this new breed of solutions, and I am just giddy at the thought that now you’ll be able to experience this, too! The SaaS model is here to stay: I hope that FabrikamShipping SaaS will help you in your projects!


Directions on Microsoft published an Azure AppFabric Secures and Connects Applications report on 10/6/2010. The report is free to subscribers; Basic or Complete subscriptions are available for $895 or $1,495 per year, respectively:

Windows Azure AppFabric, part of the Windows Azure platform, provides cloud-based services to federate application security systems and connect users and applications across the Internet. Azure AppFabric's Access Control service can relieve developers from writing custom authentication code and let them instead rely on existing identity providers. Azure AppFabric's Service Bus relays application communication across organizational boundaries, potentially reducing risky firewall configuration requirements. The services can help connect customer- and partner-facing applications, such as extranets, to on-premises systems and could also benefit applications on external hosting platforms such as Windows Azure.

Sections in the Microsoft Azure Report

  • Azure and the Evolution of Azure AppFabric
  • AppFabric Services History and Naming
  • Access Control Reduces Security Coding Tasks
  • Service Bus Connects Applications
  • Customers, Future Features
  • Pricing Based on Transactions, Connections, and Data Transferred
  • Resources

This Report Contains [2,534 words]



<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Ryan Dunn (@dunnry) and Steve Marx (@smarx) released Cloud Cover Episode 29 - Working with CSPack (00:38:07) to Channel9 on 10/8/2010:


Join Ryan and Steve each week as they cover the Microsoft cloud. You can follow and interact with the show at @cloudcovershow.
In this episode:  

  • We talk about how to manually package your services using CSPack.exe and when you would do it.
  • Show how to deploy WebMatrix projects to Windows Azure
  • Talk about differences in deploying .NET 4.0 projects as compared to .NET 3.5

Show Links:

Windows Azure Platform Training Kit - September 2010 Update
Windows Azure Application Monitoring Management Pack
SQL Azure Connection Management
Update on ASP.NET Vulnerability
Announcing NuPack, MVC3 Beta, and WebMatrix Beta 2
Building, Running, and Packaging Windows Azure Applications From the Command Line
Deploying WebMatrix Applications to Windows Azure [see post below]

See the following two posts.


Steve Marx (@smarx) published Deploying WebMatrix Applications to Windows Azure on 10/8/2010:

In this week's Cloud Cover episode, Ryan and I deploy an application to Windows Azure that was built using the new WebMatrix beta. For the most part, the process is what I described in my last post, "Building, Running, and Packaging Windows Azure Applications From the Command Line." In this post, I'll describe the few extra things I needed to do to get things to work.

imageFor those who did watch the show, you’ll remember that I didn’t have a working example after upgrading to WebMatrix Beta 2, and I wasn’t sure what was broken. It turned out to simply be a mistake on my part. I had left an old binary (from the Beta 1 bits) in the bin folder of my application, and that mismatch was what was breaking the application. The steps I’ll describe below do work for both Beta 1 and Beta 2 applications.

Adding the right assemblies to the bin folder

One common cause of deployment errors in Windows Azure is a mismatch between the .NET assemblies in the Global Assembly Cache (GAC) in your development environment versus the assemblies in Windows Azure. When you develop with Visual Studio, you need to remember to mark any assembly as "Copy Local" if it's not part of a standard .NET installation (like what we have in the cloud).

When building an application outside of Visual Studio, such as with the WebMatrix tools, you’ll need to make sure that the bin folder of your application contains those assemblies you would have marked “Copy Local” in Visual Studio.

I couldn’t find a list of assemblies I needed for WebMatrix applications, so I did something creative to find out. Once I had my application built, I used WebMatrix’s publish feature to publish it to my local machine. The two options for publishing are Web Deploy and FTP Publishing. I used the latter against an FTP server running on localhost.

Looking at what was uploaded by the FTP process, I found the following assemblies in the bin folder:

  • AdminPackage.dll
  • Microsoft.Web.Infrastructure.dll
  • Microsoft.WindowsAzure.StorageClient.dll (this one was already there before the publish, because I added it when I used Windows Azure blobs in the application)
  • NuPack.Core.dll
  • System.Web.Helpers.dll
  • System.Web.Razor.dll
  • System.Web.WebPages.Deployment.dll
  • System.Web.WebPages.dll
  • System.Web.WebPages.Razor.dll
  • WebMatrix.Data.dll

To that list, I also added System.Web.Mvc.dll, which it seems WebMatrix relies on but doesn’t publish with your application.

Making sure ASP.NET handles all URLs

Per the instructions in the WebMatrix Beta 2 Release Readme, I created a web.config file to make sure that ASP.NET saw all web requests (and not just those to known extensions like .aspx). While I was there, I also made index.cshtml the default document. Here's the web.config I ended up with:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <defaultDocument enabled="true">
            <files>
                <clear />
                <add value="index.cshtml" />
            </files>
        </defaultDocument>
        <modules runAllManagedModulesForAllRequests="true" />
    </system.webServer>
</configuration>
Using the .NET 4 runtime

By default, Windows Azure runs your application using .NET 3.5 SP1 (which was the version available when Windows Azure was first released). To use a different runtime, you need to add a parameter when creating your application package. Visual Studio does this for you automatically if your application targets .NET 4, but when you’re packaging using the command line tools, you need to do this yourself.

Because the WebMatrix libraries require .NET 4, I needed to be careful with how I packaged the application to make sure I got the right runtime version. The way you specify that you want the .NET 4 runtime is by creating a file that contains the string TargetFrameworkVersion=v4.0 and using the rolePropertiesFile parameter to tell cspack to apply that setting to your role. My final cspack command looked like this:

cspack ServiceDefinition.csdef /role:helloworld;helloworld /rolePropertiesFile:helloworld;roleproperties.txt
/out:HelloRazor.cspkg
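For reference, the roleproperties.txt file passed to the rolePropertiesFile parameter above contains just the single setting described earlier:

TargetFrameworkVersion=v4.0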
Packaging and deploying to the cloud

With a correct bin folder and web.config, I was able to deploy to the cloud using the SDK command line tools as described in my previous post.

Bonus: a utility to copy assemblies from the GAC

Now that I know what assemblies are required, I’d rather not have to publish the application locally each time to get the bin folder right, so I wrote a small application to fetch assemblies from the GAC and copy them locally for me. Just compile the following code and run it with the target directory (usually bin) and a list of assemblies:

using System;
using System.Reflection;
using System.Linq;
using System.IO;

public class CopyAssembly
{
    public static void Main(string[] argv)
    {
        if (argv.Length < 2)
        {
            Console.WriteLine("Usage: CopyAssembly <target-directory> <assembly-name-1> <assembly-name-2> ...");
            Console.WriteLine("CopyAssembly will attempt to load each of the assemblies (from the GAC or");
            Console.WriteLine("  elsewhere) and copy them to the target directory.  It will not overwrite");
            Console.WriteLine("  existing files.");
            return;
        }
            
        Directory.CreateDirectory(argv[0]);
            
        foreach (var assemblyName in argv.Skip(1))
        {
            var assembly = Assembly.LoadWithPartialName(assemblyName);
            if (assembly != null)
            {
                var fileInfo = new FileInfo(assembly.Location);
                try
                {
                    File.Copy(assembly.Location,
                        Path.Combine(argv[0], fileInfo.Directory.GetFiles(fileInfo.Name)[0].Name));
                    Console.WriteLine("Found and copied {0}.", assemblyName);
                }
                catch (IOException e)
                {
                    Console.WriteLine(e.Message);
                }
            }
            else
            {
                Console.WriteLine("Unable to locate assembly {0}.", assemblyName);
            }
        }
    }
}
More to come

You can probably tell from this week’s Cloud Cover episode that I’m quite impressed with WebMatrix and the new Razor syntax for ASP.NET Web Pages. I’m sure I’ll be building some demos using these new technologies, along with ASP.NET MVC 3 (which was just released in beta form along with WebMatrix Beta 2).

Let me know what specific topics you’d like to see covered that integrate WebMatrix and Windows Azure. My contact information is on the right side of my blog, so send me an email or a tweet.


Maarten Balliauw explained Using MvcSiteMapProvider through NuPack on 10/8/2010:

Probably you have seen the buzz around NuPack, a package manager for .NET with tight integration in Visual Studio 2010. NuPack is a free, open source developer-focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. If you download and install NuPack into Visual Studio, you can now reference MvcSiteMapProvider with a few simple clicks!

From within your ASP.NET MVC 2 project, right click the project file and use the new “Add Package Reference…” option.

Add package reference

Next, a nice dialog shows up where you can just pick a package and click “Install” to download it and add the necessary references to your project. The packages are retrieved from a central XML feed, but feel free to add a reference to a directory where your corporate packages are stored and install them through NuPack. Anyway: MvcSiteMapProvider. Just look for it in the list and click “Install”.

MvcSiteMapProvider in NuPack

Next, MvcSiteMapProvider will automatically be downloaded, added as an assembly reference, a default Mvc.sitemap file is added to your project and all configuration in Web.config takes place without having to do anything! I’m sold :-)

Disclaimer for some: I’m not saying NuPack is the best package manager out there nor that it is the best original idea ever invented. I do believe that the tight integration in VS2010 will make NuPack a good friend during development: the process of downloading and including third party components in your application becomes frictionless. That’s the aim for NuPack, and also the reason why I believe this tool matters and will matter a lot!


Steve Marx (@smarx) described Building, Running, and Packaging Windows Azure Applications From the Command Line on 10/8/2010:

Most developers who build Windows Azure applications do so using Visual Studio, Eclipse, or the PHP command line tools. All of these IDEs and tools are based on the same underlying Windows Azure SDK (which you usually get with the Visual Studio tools but can also get as a standalone download).

In this week's episode of Cloud Cover, Ryan and I explore packaging up a Windows Azure application using only the command line tools in the SDK. In this post, I'll share some more details by walking you through building, running, and packaging a Windows Azure application, all from the command line.

Step 1: Authoring a web site

The first thing we’ll do is create a simple web site. To go with the theme, I did that part via the command-line too. If you had, for example, an existing ASP.NET web site, you could use that instead.

c:\CmdlineWindowsAzure>md MyWebRole

c:\CmdlineWindowsAzure>copy con MyWebRole\index.html
<html><body><h1>Hello, World!</h1></body></html>^Z
        1 file(s) copied.

c:\CmdlineWindowsAzure>copy con MyWebRole\web.config
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <system.webServer>
        <defaultDocument enabled="true">
            <files>
                <clear />
                <add value="index.html" />
            </files>
        </defaultDocument>
    </system.webServer>
</configuration>^Z
        1 file(s) copied.

Notice that I created a minimal web.config, just to make sure index.html is configured as a default document.

Step 2: Creating a service definition

For Windows Azure to run our application, we need to describe it in a service definition file. Here, we tell Windows Azure that we have a single role, and that it’s a web role. We also specify that we want IIS to listen for incoming HTTP requests on port 80.

c:\CmdlineWindowsAzure>copy con ServiceDefinition.csdef
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CmdlineWindowsAzure" xmlns="http://schemas.microsoft.com/ServiceH
osting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole" enableNativeCodeExecution="true">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
</ServiceDefinition>^Z
        1 file(s) copied.
Step 3: Configuring the application

Now we have a Windows Azure application, but we still need to create a configuration file for it. In a more advanced Windows Azure application, we might declare some configuration settings (such as storage connection strings), but in the case of our simple web site, the only thing we need to specify in our configuration file is how many instances of our web role we want to run.

This next command line actually combines two steps. We’re packaging the application so we can run it locally, and at the same time, we’re asking cspack to generate a basic configuration file for us via the generateConfigurationFile parameter. Once we have the basic configuration file, we edit it in Notepad and change the instance count to two.

c:\CmdlineWindowsAzure>cspack ServiceDefinition.csdef /role:MyWebRole;MyWebRole /copyOnly
/out:CmdlineWindowsAzure.csx /generateConfigurationFile:ServiceConfiguration.cscfg
Windows(R) Azure(TM) Packaging Tool version 1.2.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.


c:\CmdlineWindowsAzure>type ServiceConfiguration.cscfg
<?xml version="1.0"?>
<ServiceConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="htt
p://www.w3.org/2001/XMLSchema" serviceName="CmdlineWindowsAzure" xmlns="http://schemas.mic
rosoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MyWebRole">
    <ConfigurationSettings />
    <Instances count="1" />
  </Role>
</ServiceConfiguration>
c:\CmdlineWindowsAzure>notepad ServiceConfiguration.cscfg

c:\CmdlineWindowsAzure>type ServiceConfiguration.cscfg
<?xml version="1.0"?>
<ServiceConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="htt
p://www.w3.org/2001/XMLSchema" serviceName="CmdlineWindowsAzure" xmlns="http://schemas.mic
rosoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MyWebRole">
    <ConfigurationSettings />
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
Step 4: Running the application in the development fabric

To run a Windows Azure application in the development fabric, it needs to packaged via cspack with the copyOnly parameter. This creates a directory with our roles in it instead of an encrypted and compressed package that we upload to the cloud. In the previous step, we actually created the local package we need already, and it’s now in CmdlineWindowsAzure.csx. (The convention is to use the .csx extension for this package/directory, but you can call it anything you want.)

To run it, we use the csrun command. It takes the local package (really a directory) and the configuration file. If you add the launchBrowser option, it will automatically open the browser to the HTTP endpoint we declared.

c:\CmdlineWindowsAzure>csrun CmdlineWindowsAzure.csx ServiceConfiguration.cscfg /launchBro
wser
Windows(R) Azure(TM) Desktop Execution Tool version 1.2.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

Using session id 1
Created deployment(631)
Started deployment(631)
Deployment input endpoint HttpIn of role MyWebRole at http://127.0.0.1:82/

If you try this, you should see the browser pop up and tell you “Hello, World” at this point.

We can check what’s running in the development fabric either by opening the development fabric UI, or by using csrun. We can also use csrun to delete the local deployment when we’re finished testing.

c:\CmdlineWindowsAzure>csrun /status
Windows(R) Azure(TM) Desktop Execution Tool version 1.2.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

Using session id 1
==================================================================
Deployment-Id: 631
EndPoint: http://127.0.0.1:82/
Roles:
  MyWebRole
    0 Started (ProcessId 5844)
    1 Started (ProcessId 9060)
Image-Location: c:\CmdlineWindowsAzure\CmdlineWindowsAzure.csx

c:\CmdlineWindowsAzure>csrun /remove:631
Windows(R) Azure(TM) Desktop Execution Tool version 1.2.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

Using session id 1
Stopping deployment 631.
Removing deployment 631.
Step 5: Deploying to the cloud

We invoke cspack again to create a package ready for the cloud. This time, we don’t specify generateConfigurationFile, since we already have one, and we don’t specify copyOnly because we want a package for the cloud, not for the development fabric. The usual convention is to use the extension .cspkg for the output package file.

c:\CmdlineWindowsAzure>cspack ServiceDefinition.csdef /role:MyWebRole;MyWebRole /out:Cmdli
neWindowsAzure.cspkg
Windows(R) Azure(TM) Packaging Tool version 1.2.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.


c:\CmdlineWindowsAzure>

At this point, we have a .cspkg file and a .cscfg file, which are the two files we need to provide when we deploy via the Windows Azure Portal (or MMC snap-in, or PowerShell scripts, or csmanage tool, etc.).

More Information

For more information, visit the Windows Azure SDK Tools Reference topic on MSDN, and specifically the documentation for cspack and csrun.

You might also want to watch this week’s Cloud Cover episode, where we use the command line tools to deploy a more interesting application.


Jim O'Neil continued his series with Azure@home Part 8: Worker Role and Azure Diagnostics on 10/7/2010:

This post is part of a series diving into the implementation of the @home With Windows Azure project, which formed the basis of a webcast series by Developer Evangelists Brian Hitney and Jim O’Neil.  Be sure to read the introductory post for the context of this and subsequent articles in the series.

Over the past five posts in this series, I’ve gone pretty deep into the implementation (and improvements) to the Web Role portion of the Azure@home project.  The Web Role contains only two ASPX pages, and it isn’t even the core part of the application, so there’s no telling how many posts the Worker Role will lead to! 

For point of reference, we're now concentrating on the highlighted aspect of the solution architecture I introduced in the first post of this series – the Worker Role is on the right and the Web Role on the left. One or more instances of the Worker Role wrap the Folding@home console application, launch it, parse its output, and record progress to both Azure table storage and a service endpoint hosted at http://distributed.cloudapp.net.

Worker Role in Azure@home

In Windows Azure, a worker role must extend the RoleEntryPoint class, which provides callbacks to initialize (OnStart), run (Run), and stop (OnStop) instances of that role; a web role, by the way, can optionally extend that same class.  OnStart (as you might tell by its name!) is a good place for code to initialize configuration settings, and in the Worker Role implementation for Azure@home, we also add some code to capture performance and diagnostics output via the Windows Azure Diagnostics API
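To make those callbacks concrete, here is a minimal worker role skeleton; it is a sketch of the general pattern rather than the actual Azure@home source:

using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Initialize configuration settings and start diagnostics here.
        return base.OnStart();
    }

    public override void Run()
    {
        // Long-running work: launch the Folding@home client, parse its output,
        // record progress, then sleep between passes.
        while (true)
        {
            Thread.Sleep(10000);
        }
    }

    public override void OnStop()
    {
        // Perform any cleanup before the instance is shut down.
        base.OnStop();
    }
}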

As one of the core services available to developers on the Windows Azure platform, Windows Azure Diagnostics facilitates collecting system and application logs as well as performance metrics for the virtual machine on which a role is running (under full-trust).  This capability is useful in troubleshooting, auditing, and monitoring the health of your application, and beyond that can form the basis of an auto-scaling implementation that is specifically aware of and tailored to the execution profile and characteristics of your services.

DiagnosticMonitor

The control flow for capturing diagnostics and performance metrics in Azure is depicted below in an image adapted from Matthew Kerner’s PDC 2009 presentation Windows Azure Monitoring, Logging, and Management APIs.

Windows Azure Diagnostic Monitor architecture

The role code you write, represented by the dark green box, coexists inside your role’s VM instance along with a Diagnostic Monitor, shown in the red box.  The Diagnostic Monitor is encapsulated in a separate process – MonAgentHost.exe -  that collects the desired diagnostics and performance information in local storage and interfaces with Windows Azure storage to persist that data to both tables and blobs.

In your role code, the DiagnosticMonitor class provides the means for interfacing with that external process, specifically via two static methods of note: GetDefaultInitialConfiguration, which returns a baseline configuration, and Start, which launches the monitor with that configuration.

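A minimal sketch of those two calls as they typically appear in OnStart follows; the "DiagnosticsConnectionString" setting name is the convention used by the Visual Studio role templates, not something specific to Azure@home:

// Grab the default configuration, tweak it as needed, then start the monitor.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
DiagnosticMonitor.Start("DiagnosticsConnectionString", config);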
DiagnosticMonitorConfiguration

Most of the work in setting up monitoring involves specifying what you want to collect and how often to collect it; once that's done, all you have to do is call Start!

“What and how often” is configuration information stored in an instance of the DiagnosticMonitorConfiguration class, generally initialized in your role code by a call to GetDefaultInitialConfiguration.  The default configuration tracks the following items:

  • Windows Azure logs,
  • IIS 7.0 logs (for web roles only), and
  • Windows Diagnostic infrastructure logs.

Of course you can augment that with additional logging – IIS failed requests, windows event logging, performance monitoring statistics, and custom logs –  by adding to that default configuration object.  Where and how to configure each item though can be a little confusing, and I’ve found walking through the class diagram comprising DiagnosticMonitorConfiguration helped me wrap my head around it all.  Below is the diagram, showing the relationships between that core class’ properties and the other ancillary classes that indicate what and how often to collect the various bits of information (click the diagram to enlarge it).

Windows Azure diagnostic monitoring classes

Note that there are five properties of DiagnosticMonitorConfiguration that contain information about some type of statistics gathering. All allow you to individually specify a BufferQuotaInMB and ScheduledTransferPeriod.

Lastly, the CrashDumps class seems a bit of a loner!  The ultimate location of the dump files is actually part of the DiagnosticMonitorConfiguration discussed above, but to signal you’re interested in collecting them, you must call either the EnableCollection or EnableCollectionToDirectory methods of this static class.

Across the board, the OverallQuotaInMB (on the DiagnosticMonitorConfiguration class) is just under 4GB by default, and refers to local file storage on the VM instance in which the web or worker role is running.  Since local storage is limited, transient, and not easily inspectable, the logs and diagnostics that you are interested in examining should be periodically copied to Windows Azure storage.  That can be done regularly – hence the ScheduledTransferPeriod property – or on demand (via the RoleInstanceDiagnosticManager as shown in this MSDN example).

When data is transferred to Azure storage, where does it go?  The table below summarizes that and includes the property of DiagnosticMonitorConfiguration you need to tweak to adjust the nature and the frequency of the data that is transferred.

[Table image: maps each diagnostic data source to its destination table or blob container in Windows Azure storage, along with the DiagnosticMonitorConfiguration property that controls its transfer.]

Note: WADDirectoriesTable maintains a single entity (row) per log file stored in blob storage.

There are a number of utilities (some free) out there to peruse Azure storage – here’s a screen shot of my performance counters viewed using Cerebrata’s Cloud Storage Studio:

Cloud Storage Studio

43 lines of C# code excised for brevity.

DiagnosticMonitorTraceListener

There’s actually one additional cog necessary to have all this diagnostics and monitoring work for you, but it’s configured automatically when you create a web or worker role in Visual Studio (as is the code to start the diagnostics monitor with the default configuration!).   Within the app.config for a worker role or the web.config for a web role, you need to configure a DiagnosticMonitorTraceListener to hook in to the familiar Trace functionality; here’s the contents of Azure@home’s Worker Role app.config.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <system.diagnostics>
        <trace>
            <listeners>
                <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, 
Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0,
Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                    name="AzureDiagnostics">
                    <filter type="" />
                </add>
            </listeners>
        </trace>
    </system.diagnostics>
</configuration>
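With that listener registered, ordinary Trace calls in role code are picked up by the diagnostic monitor and, once a scheduled transfer for logs is configured, show up in the WADLogsTable. A typical call (illustrative, not Azure@home code) looks like this:

System.Diagnostics.Trace.TraceInformation(
    "Instance {0} starting work loop.",
    Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.CurrentRoleInstance.Id);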

Bonus Section:  External Diagnostics

Let's take all this diagnostics monitoring up a notch! The diagnostics monitor is a separate process (MonAgentHost.exe), which not only limits its impact on your own running code but also opens up the option to monitor externally. What if you don't want to collect all that data all the time, or what if you have clients that are seeing a performance issue right now? You really don't want to bring down the application, re-instrument it, re-provision it, and hope the problem persists; it's highly doubtful your customer will have the patience for you to do all that anyway!

Earlier in this post, I alluded to the ability to transfer logs and diagnostics to Azure storage on demand, as you might do for defined checkpoints in your application’s workflow.  That functionality is enabled by the RoleInstanceDiagnosticManager class, which is also the linchpin of remote diagnostics.  As shown below, configuring remote diagnostics involves interacting with the diagnostics configuration stored in Windows Azure storage – specifically in a blob container called wad-control-container.  

Remote Azure Diagnostics workflow

It's roughly a five-step process:

  • Instantiate a DeploymentDiagnosticManager instance given Windows Azure storage account credentials and a specific Azure deployment ID. The deployment ID is a 32-character hexadecimal value displayed on the Azure portal for the given hosted service deployment; for the local development fabric it's the string deployment followed by an integer in parentheses – like deployment(479).
  • Get a set of RoleInstanceDiagnosticManager instances (or a single one) corresponding to a given role name and, optionally, the instance ID of the Azure compute instances you're interested in configuring.
  • Access the configuration for those instances (via GetCurrentConfiguration), yielding a DiagnosticMonitorConfiguration instance, which was described in gory detail earlier in this post. 
  • Make the desired changes to that configuration – adding or removing logging and tracing options as well as performance counters – via the various DiagnosticDataBufferConfiguration classes.
  • Update the configuration (via SetCurrentConfiguration) to push the changes back to Windows Azure blob storage.

The diagnostics manager will then pick up the changes the next time the configuration is polled (as specified by ConfigurationChangePollInterval).

Don’t forget to set a transfer period if you’re introducing a new counter or log configuration; otherwise, you’ll be happily collecting information into local storage but never see it appear in the expected tables or blob containers in your Windows Azure diagnostics storage account. 

To bring it all together, here’s a complete console application implementation that accepts a deployment id on the command line and sets up a performance counter to collect the CPU utilization every 30 seconds for all instances of a role called “WebRole” and transfer that data to the WADPerformanceCountersTable every 10 minutes.

C# code excised for brevity.
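In place of the excised listing, here is a minimal sketch of that five-step flow; the storage credentials, role name, and intervals are placeholders taken from the description above, not the original code:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics.Management;

class RemoteDiagnosticsSketch
{
    static void Main(string[] args)
    {
        string deploymentId = args[0]; // the 32-character hex ID from the portal
        var storageAccount = CloudStorageAccount.Parse("[diagnostics storage connection string]");

        // Step 1: diagnostic manager for the whole deployment
        var deploymentManager = new DeploymentDiagnosticManager(storageAccount, deploymentId);

        // Step 2: a RoleInstanceDiagnosticManager for each instance of the "WebRole" role
        foreach (var instanceManager in
            deploymentManager.GetRoleInstanceDiagnosticManagersForRole("WebRole"))
        {
            // Step 3: read the configuration currently stored in wad-control-container
            DiagnosticMonitorConfiguration config = instanceManager.GetCurrentConfiguration();

            // Step 4: collect CPU utilization every 30 seconds...
            config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(30)
            });
            // ...and transfer it to the WADPerformanceCountersTable every 10 minutes
            config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(10);

            // Step 5: write the updated configuration back to blob storage
            instanceManager.SetCurrentConfiguration(config);
        }
    }
}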


The Windows Azure Team posted Congressional House Republican Caucus Wins Altimeter Group "Creating Impact" Award for its Windows Azure-powered AmericaSpeakingOut.org on 10/7/2010:

In conjunction with its Rise of Social Commerce event yesterday in Palo Alto, CA, Altimeter Group announced that the Congressional House Republican Caucus has been recognized with the "Creating Impact" Open Leadership Award for AmericaSpeakingOut.org (ASO). The awards recognize the organizations and individuals using social technologies to transform their organizations.

Congressional House Republicans (GOP) partnered with Microsoft to create ASO, which was designed to increase the dialogue between Americans and their Congress through a virtual town hall environment. Built on Microsoft TownHall, which runs on the Windows Azure platform, the goal of ASO was to provide citizens with an opportunity to share their ideas with the Congressional House GOP members. In the four months since the site launched, citizens have proposed 15,000 ideas for federal legislation and cast more than 500,000 votes through ASO.

You can read more about the Open Leadership Awards and watch the Rise of Social Commerce conference live here.

TownHall is available for free, and the source code can be downloaded from the Microsoft Code Gallery website.


Synteractive reported Open Government Vision Continues to Flourish Through Data.gov in this 10/7/2010 press release on Marketwire:

Synteractive, a leader in strategy consulting and business solutions, has been tapped to partner in building a new cloud-based dataset hosting solution for Data.gov using Microsoft technologies: Windows Azure, SQL Azure, SharePoint 2010, and Bing.

imageThis partnership is led by Smartronix and is comprised of Smartronix, Synteractive, KPMG and TMP Government. Team Smartronix will work with the General Services Administration (GSA) in building a virtual dataset hosting platform designed to improve constituents' ability to access and consume important Federal data. This same team partnered with Microsoft in orchestrating Recovery.gov 2.0.

"To realize the administration's vision of Open Government, citizens need access to data that is easy to understand, and easy to work with," said Evan Burfield, Chairman and CEO of Synteractive, the 2010 Microsoft Federal Partner of the Year. "This cloud-based hosting model will facilitate citizen engagement while reducing data management complexities for Federal agencies."

The solution is an innovative, interoperable architecture built around services such as Microsoft Codename Dallas which are ideal for open government data. Dallas is an information marketplace that has the ability to bring data, imagery, and real-time web services from government sources together into a single location.

About Synteractive
Specializing in both the public and private sectors, Synteractive partners with you to find the simplest and most effective solutions to your toughest business problems through social and technological innovation. For more information, please visit www.Synteractive.com.


Joseph Fultz wrote Performance-Based Scaling in Windows Azure for MSDN Magazine’s October 2010 issue:

Download the Code Sample

Without a doubt, cloud computing is gaining lots of mindshare, and its practical use is building momentum across technology platforms and throughout the industry. Cloud computing isn’t a new or revolutionary concept; indeed, it has been around for years in the form of shared hosting and other such services. Now, however, advances in technology and years of experience running servers and services have made cloud computing not only technically practical, but increasingly interesting to both consumers and providers.

Progress in cloud computing will reach beyond IT and touch every part of your company—from the people managing hardware and services, to the developers and architects, to the executives who will approve the budget and pay the bills. You’d better be prepared for it.

In this column I’ll focus primarily on the developers and architects who need to understand and leverage cloud computing in their work. I’ll supply some guidance on how to accomplish a given task, including notes on architecture considerations and their impact on cost and performance. Please tell me what you think of the topics I cover and, even more importantly, about topics that are of particular interest in cloud computing.

Seeding the Cloud

One of the first benefits people focus on in cloud computing is the idea that application owners don’t have to worry about infrastructure setup, configuration or maintenance. Let’s be honest: that’s pretty compelling.

However, I think it’s more important to focus on the ability to scale up or down to serve the needs of the application owner, thereby creating a more efficient cost model without sacrificing performance or wasting resources. In my experience, demand elasticity is something that comes up in any conversation about the cloud, regardless of the platform being discussed.

In this installment I’ll demonstrate how to use performance counter data from running roles to automate the process of shrinking or growing the number of instances of a particular Web Role. To do this, I’ll take a look at a broad cross-section of Windows Azure features and functionality, including Windows Azure Compute, Windows Azure Storage and the REST Management API.

The concept is quite simple: test collected performance data against a threshold and then scale the number of instances up or down accordingly. I won’t go into detail about collecting diagnostic data—I’ll leave that to you or to a future installment. Instead, I’ll examine performance counter data that has been dumped to a table in Windows Azure Storage, as well as the code and setup required to execute the REST call to change the instance count in the configuration. Moreover, the downloadable code sample will contain a simple page that will make REST management calls to force the instance count to change based on user input. The scenario is something like the drawing in Figure 1.


Figure 1 Performance-Based Scaling

Project Setup

To get things started, I created a Windows Azure Cloud Service project that contains one Worker Role and one Web Role. I configured the Web Role to publish performance counter data, specifically % Processor Time, from the role and push it to storage every 20 seconds. The code to get that going lives inside of the WebRole::OnStart method and looks something like this:

var performanceConfiguration = new PerformanceCounterConfiguration();
performanceConfiguration.CounterSpecifier = @"\Processor(_Total)\% Processor Time";
performanceConfiguration.SampleRate = System.TimeSpan.FromSeconds(1.0);

// Add the new performance counter to the configuration
config.PerformanceCounters.DataSources.Add(performanceConfiguration);
config.PerformanceCounters.ScheduledTransferPeriod = System.TimeSpan.FromSeconds(20.0);

This code registers the performance counter, sets the collection interval for data and then pushes the data to storage. The values I used for intervals work well for this sample, but are not representative of values I’d use in a production system. In a production system, the collection interval would be much longer as I’d be concerned with 24/7 operations. Also, the interval to push to storage would be longer in order to reduce the number of transactions against Windows Azure Storage.

Next I create a self-signed certificate that I can use to make the Azure REST Management API calls. Every request will have to be authenticated and the certificate is the means to accomplish this. I followed the instructions for creating a self-signed certificate in the TechNet Library article “Create a Self-Signed Server Certificate in IIS 7” (technet.microsoft.com/library/cc753127(WS.10)). I exported both a .cer file and a .pfx file. The .cer file will be used to sign the requests I send to the management API and the .pfx file will be imported into the compute role via the management interface (see Figure 2).


Figure 2 Importing Certificates

I’ll come back later and grab the thumbprint to put it in the settings of both the Web Roles and Worker Roles that I’m creating so they can access the certificate store and retrieve the certificate.

Finally, to get this working in Windows Azure, I need a compute project where I can publish the two roles and a storage project to which I can transfer the performance data. With these elements in place, I can move on to the meat of the work.

Is It Running Hot or Cold?

Now that I’ve got the Web Role configured and code added to publish the performance counter data, the next step is to fetch that data and compare it to a threshold value. I’ll create a TestPerfData method in which I retrieve the data from the table and test the values. I’ll write a LINQ statement similar to the following:

// selectedData holds the performance counter rows read from the
// WADPerformanceCountersTable (see the sketch below).
double AvgCPU = (
  from d in selectedData
  where d.CounterName == @"\Processor(_Total)\% Processor Time"
  select d.CounterValue).Average();

By comparing the average utilization, I can determine the current application performance. If the instances are running too hot, I can add instances. If they’re running cold and I’m wasting resources—meaning money—by having running instances I don’t need, I can reduce the number of instances.

You’ll find in-depth coverage of the code and setup needed to access the performance counter table data in a blog post I wrote at blogs.msdn.com/b/joseph_fultz/archive/2010/06/30/querying-azure-perf-counter-data-with-linq.aspx. I use a simple if-then-else block to assess the state and determine the desired action. I’ll cover the details after I’ve created the functions needed to change the running service configuration.
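
For reference, the retrieval side might look something like the sketch below. The entity class, connection-string name and five-minute window are my assumptions rather than code from the article; see the blog post above for the author's full version.

// Hypothetical entity shape matching the WADPerformanceCountersTable columns used here.
public class PerformanceData : TableServiceEntity
{
  public long EventTickCount { get; set; }
  public string CounterName { get; set; }
  public double CounterValue { get; set; }
}

// Read roughly the last five minutes of samples written by Windows Azure Diagnostics.
CloudStorageAccount account =
  CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
TableServiceContext context =
  account.CreateCloudTableClient().GetDataServiceContext();
long since = DateTime.UtcNow.AddMinutes(-5).Ticks;
var selectedData =
  (from d in context.CreateQuery<PerformanceData>("WADPerformanceCountersTable")
   where d.EventTickCount > since
   select d).AsTableServiceQuery().ToList();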

Using the REST Management API

Before I can finish the TestPerfData method, I have a little more work to do. I need a few methods to help me discover the number of instances of a given role, create a new valid service configuration for that role with an adjusted instance count, and, finally, allow me to update the configuration.

To this end I’ve added a class file to my project and created the six static methods shown in Figure 3.

Figure 3 Configuration Methods
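
Figure 3 is an image in the original article, so the method list doesn't survive here. Judging from how the helpers are called later in the text, the class presumably exposes signatures along these lines (the exact shapes are my assumptions; bodies omitted):

// Assumed signatures for the AzureRESTMgmtHelper class referenced below.
public static class AzureRESTMgmtHelper
{
  public static X509Certificate LookupCertificate(string thumbprintSettingName)
    { throw new NotImplementedException(); }
  public static string GetDeploymentInfo()
    { throw new NotImplementedException(); }
  public static string GetServiceConfig(string deploymentInfoXml)
    { throw new NotImplementedException(); }
  public static string GetInstanceCount(string serviceConfig, string roleName)
    { throw new NotImplementedException(); }
  public static string ChangeInstanceCount(string serviceConfig, string roleName, string instanceCount)
    { throw new NotImplementedException(); }
  public static void ChangeConfigFile(string updatedServiceConfig)
    { throw new NotImplementedException(); }
}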

The calls that interact with the REST Management API must include a certificate. To accomplish this, the certificate is added to the hosted service and the thumbprint is added to the role configuration and used to fetch the certificate at run time. Once the service and role are configured properly, I use the following code to grab the certificate from the Certificate Store:

string Thumbprint =
  RoleEnvironment.GetConfigurationSettingValue(ThumbprintSettingName);
X509Store certificateStore =
  new X509Store(StoreName.My, StoreLocation.LocalMachine);
certificateStore.Open(OpenFlags.ReadOnly);
X509Certificate2Collection certs =
  certificateStore.Certificates.Find(
    X509FindType.FindByThumbprint, Thumbprint, false);

This is the main code of the LookupCertificate method, which is called from the methods that interact with the REST API. I’ll review the GetDeploymentInfo function as an example of how the calls are constructed. For this example, I’ve hardcoded some of the variables needed to access the REST API:

string x_ms_version = "2009-10-01";
string SubscriptionID = "[your subscription ID]";
string ServiceName = "[your service name]";
string DeploymentSlot = "Production";

I need to create a HttpWebRequest with the proper URI, set the request headers and add my certificate to it. Here I build the URI string and create a new HttpWebRequest object using it:

string RequestUri = "https://management.core.windows.net/" +
  SubscriptionID + "/services/hostedservices/" +
  ServiceName + "/deploymentslots/" + DeploymentSlot;

HttpWebRequest RestRequest =
  (HttpWebRequest)HttpWebRequest.Create(RequestUri);

For the call to be valid, it must include the version in the header. Thus, I create a name-value collection, add the version key and data, and add that to the request headers collection:

NameValueCollection RequestHeaders = new NameValueCollection();
RequestHeaders.Add("x-ms-version", x_ms_version);
RestRequest.Headers.Add(RequestHeaders);

The last thing to do to prepare this particular request is to add the certificate to the request:

X509Certificate cert = LookupCertificate("RESTMgmtCert");
RestRequest.ClientCertificates.Add(cert);

Finally, I execute the request and read the response:

WebResponse RestResponse = RestRequest.GetResponse();
string ResponseBody;
using (StreamReader RestResponseStream =
  new StreamReader(RestResponse.GetResponseStream(), true)) {
  ResponseBody = RestResponseStream.ReadToEnd();
}

That’s the general pattern I used to construct requests made to the REST Management API. The GetServiceConfig function extracts the Service Configuration out of the deployment configuration, using LINQ to XML statements like the following:

XElement DeploymentInfo = XElement.Parse(DeploymentInfoXML);
string EncodedServiceConfig =
  (from element in DeploymentInfo.Elements()
   where element.Name.LocalName.Trim().ToLower() == "configuration"
   select (string)element.Value).Single();

In my code, the return value of GetServiceConfig is passed on to the GetInstanceCount or ChangeInstanceCount functions (or both) to extract the information or update it. The return from ChangeInstanceCount is an updated Service Configuration, which is passed to ChangeConfigFile. In turn, ChangeConfigFile pushes the update to the service by constructing a request similar to the previous one used to fetch the deployment information, with these important differences:

  1. “/?comp=config” is added to the end of the URI
  2. The PUT verb is used instead of GET
  3. The updated configuration is streamed as the request body
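
Sketched out, ChangeConfigFile applies those three differences to the same request pattern shown earlier. Everything below is pieced together from the article's description (UpdatedSvcConfig is the configuration string produced by ChangeInstanceCount); check the Service Management API documentation for the exact verb and payload schema it expects.

// Sketch of ChangeConfigFile, reusing the RequestUri, x_ms_version and
// LookupCertificate pieces shown above. The payload is simplified here;
// the management API defines a specific XML schema for configuration changes.
string changeConfigUri = RequestUri + "/?comp=config";               // difference 1
HttpWebRequest changeRequest =
  (HttpWebRequest)HttpWebRequest.Create(changeConfigUri);
changeRequest.Method = "PUT";                                        // difference 2, per the text
changeRequest.Headers.Add("x-ms-version", x_ms_version);
changeRequest.ClientCertificates.Add(LookupCertificate("RESTMgmtCert"));
changeRequest.ContentType = "application/xml";

byte[] body = Encoding.UTF8.GetBytes(UpdatedSvcConfig);              // difference 3: config as the body
changeRequest.ContentLength = body.Length;
using (Stream requestStream = changeRequest.GetRequestStream()) {
  requestStream.Write(body, 0, body.Length);
}
using (WebResponse response = changeRequest.GetResponse()) {
  // A successful response indicates the configuration change was accepted.
}
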
Putting It All Together

With the functions in place to look up and change the service configuration, and having done the other preparatory work such as setting up counters, configuring the connection string settings for Storage, and installing certificates, it’s time to implement the CPU threshold test.

The Visual Studio template produces a Worker Role that wakes up every 10 seconds to execute code. To keep things simple, I’m leaving that in place and adding a single timer that runs every five minutes. In the timer handler, a simple conditional statement tests whether average utilization is above or below 85 percent. I deploy two instances of the Web Role to start, so I can guarantee that the instance count will decrease from the initial two instances to a single instance.

Inside the Worker Role I have a Run method that declares and instantiates the timer. Inside of the timer-elapsed handler I add a call to the TestPerfData function I created earlier. For this sample, I’m skipping the implementation of the greater-than condition because I know that the CPU utilization will not be that high. I set the less-than condition to be less than 85 percent as I’m sure the counter average will be lower than that. Setting these contrived conditions will allow me to see the change via the Web management console or via Server Explorer in Visual Studio.
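
A minimal sketch of that Run method, assuming a System.Timers.Timer (the wiring is mine; the article describes it only in prose):

public override void Run()
{
  // Five-minute timer that drives the scaling check.
  var scaleTimer = new System.Timers.Timer(TimeSpan.FromMinutes(5).TotalMilliseconds);
  scaleTimer.Elapsed += (sender, e) => TestPerfData();
  scaleTimer.Start();

  // The template's 10-second wake-up loop, left in place.
  while (true)
  {
    Thread.Sleep(10000);
    Trace.TraceInformation("Working");
  }
}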

In the less-than-85-percent block I check the instance count, modify the service configuration and update the running service configuration, as shown in Figure 4.

Figure 4 The Less-Than-85-Percent Block

else if (AvgCPU < 85.0) {
  Trace.TraceInformation("In the AvgCPU < 85 test.");
  string deploymentInfo =
    AzureRESTMgmtHelper.GetDeploymentInfo();
  string svcconfig =
    AzureRESTMgmtHelper.GetServiceConfig(deploymentInfo);
  int InstanceCount =
    System.Convert.ToInt32(
      AzureRESTMgmtHelper.GetInstanceCount(svcconfig, "WebRole1"));
  // Never scale below one instance: zero is not a valid configuration.
  if (InstanceCount > 1) {
    InstanceCount--;
    string UpdatedSvcConfig =
      AzureRESTMgmtHelper.ChangeInstanceCount(
        svcconfig, "WebRole1", InstanceCount.ToString());
    AzureRESTMgmtHelper.ChangeConfigFile(UpdatedSvcConfig);
  }
}

I make sure to check the instance count before adjusting down, because I don’t want it to go to zero, as this is not a valid configuration and would fail.

Running the Sample

I’m now ready to execute the example and demonstrate elasticity in Windows Azure. Knowing that my code is always right the first time—ahem—I right-click on the Cloud Service Project and click Publish. The dialog gives you the option to configure your credentials, which I’ve already done (see Figure 5).

Figure 5 Publishing the Project

I click OK and just have to wait for the package to be copied up and deployed. When deployment is complete, I switch to the Web management console and see two Web Roles and one Worker Role running, as shown in Figure 6.

Figure 6 Two Web Roles and One Worker Role

I wait for the timer event to fire, executing the code that will determine that the average CPU utilization is less than 85 percent, and decrement the WebRole1 instance count. Once this happens, the management page will refresh to reflect an update to the deployment.

Because I’m using small VMs, changing the count by only one, and the application is lightweight (one .aspx page), the update doesn’t take long and I see the final, auto-shrunk deployment as shown in Figure 7.

Figure 7 Now One Web Role and One Worker Role

Blue Skies

I want to share a few final thoughts about the sample in the context of a real implementation, because there are a few important points to consider.

First, the test is trivial and contrived. In a real implementation you’d need to evaluate more than simple CPU utilization, and you’d need to take into account the quantum over which the collection occurred.

In addition, you need to evaluate the costs of using Windows Azure Storage. Depending on the solution, it might be advisable to prune the table so that it holds only the records of interest. You can lengthen the transfer interval to lower transaction costs, or you may want to move the data to SQL Azure to minimize that cost.

You also need to consider what happens during an update. A direct update will cause users to lose connectivity. It may be better to bring the new instances up in staging and then switch the virtual IP address. In either case, however, you’ll have session and viewstate problems. A better solution is to go stateless and disable the test during scale adjustments.

That’s it for my implementation of elasticity in Windows Azure. Download the code sample and start playing with it today.

Joseph Fultz is an architect at the Microsoft Technology Center in Dallas where he works with both Enterprise Customers and ISVs designing and prototyping software solutions to meet business and market demands. He’s spoken at events such as Tech•Ed and similar internal training events.

Thanks to the following technical expert for reviewing this article: Suraj Puri


Packt Publishing announced that Microsoft Azure: Enterprise Application Development by Richard J. Dudley and Nathan Duchene will be available in print and eBook formats in December 2010:

imageMicrosoft's Azure platform has proved itself to be a highly scalable and highly available platform for enterprise applications. Despite a familiar development model, there is a difference between developing for Azure and moving applications and data into the cloud. You need to be aware of how to technically implement large-scale elastic applications. In this book, the authors develop an Azure application and discuss architectural considerations and important decision points for hosting an application on Azure.

imageThis book is a fast-paced introduction to all the major features of Azure, with considerations for enterprise developers. It starts with an overview of cloud computing in general, followed by an overview of Microsoft's Azure platform, and covers Windows Azure, SQL Azure, and AppFabric, discussing them with the help of a case-study.

The book guides you through setting up the tools needed for Azure development, and outlines the sample application that will be built in the later chapters. Each subsequent chapter focuses on one aspect of the Azure platform—web roles, queue storage, SQL Azure, and so on—discussing the feature in greater detail and then providing a programming example by building parts of the sample application. Important architectural and security considerations are discussed with each Azure feature.

image722[3]The authors cover topics that are important to enterprise development, such as transferring data from an on-premises database to SQL Azure using SSIS, securing an application using AppFabric access control, blob and table storage, and asynchronous messaging using Queue Storage. Readers will learn to leverage the use of queues and worker roles for the separation of responsibilities between web and worker roles, enabling linear scale out of an Azure application through the use of additional instances. A truly "elastic" application is one that can be scaled up or down quickly to match resources to demand as well as control costs; with the practices in this book you will achieve application elasticity.

Develop large scale elastic applications on the Microsoft cloud platform

What you will learn from this book :

  • Explore the major features of Azure
  • Examine the differences between Azure development and traditional application development
  • Integrate with an on-premises database using SSIS
  • Utilize blob, table, and queue storage
  • Develop web and worker roles
  • Log application diagnostics and events
  • Create a WCF service in a web role
  • Review considerations for moving an application and data into the cloud
  • Create a Windows Forms application, and integrate it with web services using Visual Studio
  • Monitor your application's performance using Azure Diagnostics
Approach

This fast-paced guide enables developers to design and build Enterprise applications for the cloud. You will find it easy to follow this book, as the authors use an actual online portal application for the case study. Throughout the development of the sample application there is discussion of important considerations for moving an application into the cloud.

Who this book is written for

If you are a developer or architect who wants to build enterprise-level applications with Azure, then this is the perfect book for you! Since the examples are in .NET, the book will skew to MS-oriented developers. But a lot of what is discussed will be applicable to anyone wanting to work with Azure. No matter what language you use, you provision the application fabric the same way, and all the underlying concepts will be the same. You will need experience with Visual Studio, and some basic SQL Server knowledge.


<Return to section navigation list> 

Visual Studio LightSwitch

imageimage222[4][2]No significant articles today.


<Return to section navigation list> 

Windows Azure Infrastructure

Adron Hall (@adronbh) continued his What You Need and Want With Windows Azure Part II series with a profusely illustrated tutorial of 10/8/2010:

image One of the most useful tools to use in Windows Azure Development is the Windows Azure MMC.  The Microsoft Management Console, or MMC, is the management console that many of the Windows Server Management interfaces can plug into.  The Windows Azure Team has put together the Windows Azure specific MMC Console Plugin that is available for download on Microsoft MSDN Code Site at http://code.msdn.microsoft.com/windowsazuremmc.

Windows Azure MMC Code Site

When you navigate to the page, click on the tab for downloads and you will find three different files:

  • WindowsAzureMMC.exe
  • PerfMon-Friendly Log Viewer Plugin
  • PerfMon-Friendly Log Viewer Plugin (Source Only)

The main file you’ll need to download is the WindowsAzureMMC.exe file.  Once this file is downloaded, run the executable.  An installation wizard will appear; just click Next and step through each of the steps, accepting the defaults.

Windows Azure Management Tool Installation

Once the executable runs it should pop up a Windows Explorer window; if not, navigate to where the files were just installed (unzipped).  By default the installer places them in C:\WindowsAzureMMC\.  Find the file StartHere.cmd located in the installation directory and run the file.

StartHere.cmd

When the file is executed a DOS prompt will flicker, and another configuration wizard titled Windows Azure Management Tools will appear.

Configuration Wizard for the Windows Azure Management Tools

Click next and the installation will start, checking each of the dependencies required to execute the MMC.

Detecting Required Software

Continue clicking Next through any remaining prompts, and the Windows Azure MMC Console will open once StartHere.cmd finishes executing.  Click Close on the configuration wizard.

Installation Completed

The MMC will now be displayed on screen as shown below.

Windows Azure Management Tool (MMC)

Open the Windows Azure Management section by clicking on the small tree view arrow on the left hand side.  The tree view will open up to a Service Management node with Hosted Services, Storage Services, and Affinity Groups nodes listed underneath it.  Select the Service Management node itself so that the middle window shows the connection form shown here.

Windows Azure MMC Services Management Node Connection

Now navigate back to the Windows Azure Platform Web Interface (http://windows.azure.com).  Click on the project displayed on the main screen to select it.

Windows Azure Portal Interface

When the following page displays, click on the Account tab at the center top of the page.

Windows Azure Project

When the Account page finishes rendering, look at the very bottom to locate the subscription ID.

Windows Azure Project Account Properties

Enter the subscription ID into the form.  Now click on the ellipsis button on the form so that the available certificates are displayed.

API Certificate

Click on the underlined link for the certificate you want to use (sometimes there are a few options, depending on what is already installed on the machine).  A properties dialog should appear when you click the link.

Certificate Details

Click on the Details Tab on the top of the properties dialog window.

Certificate Details, Details Tab

Now click on the Copy to File button.  An export process will start for the certificate.  Click Next.  On the next screen make sure No, do not export the private key is selected.  On the screen after that, select the DER encoded binary X.509 (.CER) option.  Verify this setting and then click Next.

Certificate Export Wizard

Click on next and then enter the path and filename where you want to save the certificate.

Save As File Name for Certificate

Now that you have the certificate, return to the Windows Azure Platform Web Interface (http://windows.azure.com).  Navigate to the Account tab section of the site again.  On that page click on the Manage My API Certificates link.  This is the same page as shown above in the image captioned “Windows Azure Project Account Properties”.  Once the page displays, click on the Choose File button.  Browse to the location where the certificate was saved and select it.  Now click the upload button to upload the file to the Windows Azure account.

API Certificates Upload

When the file is done uploading the page will update and show something similar to what is shown in the next screenshot.

API Certificates, Finished Uploading

Now you can click on the OK button, if you haven’t already, to confirm the API Certificate in the Windows Azure MMC.  Click on the Connect button in the far right window area of the MMC.  It should take a second, but the connection should occur; you can tell because the Default storage account drop down becomes enabled.  At this time, though, since we haven’t placed anything in storage or started any storage services, there will be nothing displayed in the drop down.

At this point the MMC is functional; there just isn’t much to look at in the Windows Azure account yet.  So let’s change that and set up some sample services.  First head back over to the Windows Azure Platform Web Interface (http://windows.azure.com).  Once you’re in, click on the project as we did before so that it is the focus point.  Click on the +New Service link in the top right of the main page window section.

Starting a Windows Azure Service

The next screen will display the options to create a Windows Azure Storage Account or a Hosted Services Role.  Click on the Windows Azure Storage Account option.

Windows Azure Create a Service

On the screen that renders fill out the service label and description.  Both of these fields are mostly free text, allowing spaces and special characters.  Click next when you have filled out the label and description.

Services Properties

The next form that comes up has the public storage account name.  This field must be compliant with URI naming conventions.  Because these storage services are exposed through a RESTful API, it is best to follow REST architecture ideals and name the location something easy to read and to remember.  You can click the check availability button to verify whether the name is already in use.  If it is available, move down and select Anywhere US for the region.

First Storage Sample

Once you are finished click the create button at the bottom of the form.  The next window will render the results of creating the Windows Azure Storage Account.  This page has all the information you’ll need to fill out the Windows Azure MMC connection information.

Windows Azure Storage Account Properties

If you still have the Windows Azure MMC open, bring focus to it again.  If not open it back up and open the Windows Azure Management tree view back to the Service Management Node and verify or enter the information for the subscription ID and API Certificate.  Now click on the Connect link button on the right hand window pane.  The MMC will then connect and will populate the Default storage account drop down.  Click on the drop down and you will see your Windows Azure Storage Account that we just created.

Service Management Node Connected with Default Storage Account

Now that we have a Windows Azure Storage Account, let’s get a Windows Azure Services Role running also.  Navigate back to the Windows Azure Platform Web Interface (http://windows.azure.com).  Once the page has rendered, click on your specific project, wait for that page to render, and then click on the +New Service link. This time select the Windows Azure Services Role to create.  On the next page fill out the service label and description the same as with the Windows Azure Storage Account creation.

Create a Service (Role)

Click next when complete.  On the next page that renders you’ll again pick a public URI subdomain path (I’ve used firstservicesample for mine) and select Anywhere US from the drop down for the region.  Click on the create button when complete.  The following page will display with a single cube image in the center of the screen, labeled Production.  For now, the instance role is available, but nothing is deployed and nothing is being charged at this time.  However, this is perfect for checking out the Windows Azure MMC display of the services.

Service Role

Return to the Windows Azure MMC and click on the Hosted Services node.  Click on Connect in the right hand window pane under actions.  The firstservicesample node, or whatever you may have named the service, will display with the staging, production, and certificates nodes appearing below.  From here you can deploy, upgrade, run, delete, suspend, swap, or even save the configuration of your roles.  This is extremely helpful so that one doesn’t always need to return to the site and can maintain multiple hosted services, storage services, affinity groups, and more from the MMC.

Windows Azure Staging

Next let’s click on the Storage Explorer Node just below the Service Management Node.  Click on New Connection and enter the Account Name as shown below.

New Account Form

Once the account name and URIs are filled out, return to the storage service’s properties page in the Windows Azure Platform Web Interface (http://windows.azure.com).

Windows Azure Storage Account Properties

On this page you’ll find the key you need to finish off the form in the Windows Azure MMC.  Once you’ve completed the form, click OK.  The MMC should now populate the cloud storage account area with a node for BLOB Containers.

Storage Account Services

Click on the BLOB Containers node and then click on Add Container in the right hand side window pane under actions.  Enter a name (in this case I entered musicmanager) and click OK.  Now you should have a BLOB Storage Container in your Windows Azure Storage Account.

Windows Azure BLOB Container

Click on the musicmanager BLOB Container, or whatever you named yours, and then click on Upload BLOB under the actions window on the right hand side.  Select a file; I’ve chosen a music MP3 I have on my local machine.

Uploading a BLOB File (A Music MP3)

Click OK and you’ll see the Operations queue node on the lower left hand side of the Console Root tree view populate with the upload activity task.

The Upload in the Operations Queue

After the upload is complete the BLOB Container will then show the BLOBs just below it when selected in the Windows Azure MMC.
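
As an aside, the same upload could be done in code with the StorageClient library rather than the MMC; here’s a rough sketch (the account name, key and file path are placeholders):

// Rough programmatic equivalent of the MMC's Upload BLOB action.
var credentials = new StorageCredentialsAccountAndKey("youraccount", "your-account-key");
var account = new CloudStorageAccount(credentials, true); // true = use HTTPS
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("musicmanager");
container.CreateIfNotExist();
CloudBlob blob = container.GetBlobReference("song.mp3");
blob.UploadFile(@"C:\Music\song.mp3");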

Bravo, Adron!


The HPC in the Cloud blog quotes an IBM Survey: IT Professionals Predict Mobile and Cloud Technologies Will Dominate Enterprise Computing By 2015 in a 10/8/2010 post to the This Just In blog:

image Information technology professionals predict that mobile and cloud computing will emerge as the most in-demand platforms for software application development and IT delivery over the next five years, according to a new IBM survey released today.

image The 2010 IBM Tech Trends Survey, conducted online by IBM developerWorks, provides insight into the most significant enterprise technology and industry trends based on responses from 2,000 IT developers and specialists across 87 countries. 

According to the survey, more than half of all IT professionals – 55 percent -- expect mobile software application development for devices such as iPhone and Android, and even tablet PCs like iPad and PlayBook, will surpass application development on all other traditional computing platforms by 2015.

With the proliferation of these mobile devices, industry analysts are predicting mobile applications sales will undergo massive growth over the next three years, with estimates of mobile application revenues expanding from $6.2 billion this year to nearly $30 billion by 2013.

Supporting the growing number of software developers creating new applications for mobile devices, IBM now offers no-cost mobile computing technology resources, through IBM developerWorks, for application development on mobile platforms such as iPhone, iPad, HTML5 and Android.

IBM also today launched the first developerWorks mobile application for the Apple iPhone, providing developers around the world with mobile access to build skills and network with colleagues using the professional social networking platform, My developerWorks, built on IBM Lotus Connections.

Additional IBM Tech Trends Survey findings include:

  • 91 percent anticipate cloud computing will overtake on-premise computing as the primary way organizations acquire IT over the next five years
  • Mobile and cloud computing are followed by social media, business analytics and industry-specific technologies as the hottest IT career opportunities beginning in 2011
  • 90 percent believe it is important to possess vertical industry-specific skills for their jobs, yet 63 percent admit they are lacking the industry knowledge needed to remain competitive

Read more: 2 | 3 All »


Michael Coté (@cote) second’s IBM assessment [above] in his “What’s hot right now?” 3 Tech Picks post:

Eggs

image I’m often asked “what’s hot now?” And why the hell not? I was asked most recently by Issac Roth at Makara at the Rackspace SaaS Summit during lunch yesterday. My focus tends to be more enterprise-y than consumer (I don’t spend too much on the cadre of “some dot com will buy us” business models).

Here’s what I generally tell people, expanded out beyond what my mouth can usually produce:

  • Cloud Computing – I spend a lot of time talking about this with folks, both on the buy side and the sell side (vendors). Vendors are all trying to ride the wave of cloud interest (cheaper, faster, more agile) and have either come up with genuine offerings or shimmied what they have (virtualization, management, etc.) into that category. Cloud is mostly understood to be “public” (Amazon, Rackspace, and the rest) or “private” (using cloud-inspired methods and technologies to run behind-the-firewall data centers). Most vendors recognize that the easier money and (more importantly) customer retention is in private cloud. Folks universally agree that Amazon is ahead of anyone else in the public cloud space, and there’s some uncertainty about how much of the private cloud elephant companies can eat in 12-month transformation project chunks. Another thing I should write up is the huge interest telcos are having in IaaS cloud technologies: these guys have piles of infrastructure they need to protect from Amazon & co. and seem to be going crazy buying IaaS clouds. For example, see the recent moves by KT with involvement from both Cloud.com and CloudScaling. And there’s starting to be action on the other end of the dumb pipes.
  • Mobile app development – developers I talk with are obsessed with the iPhone/iPad, or “iOS” as Apple has mercifully re-labeled their category. They’d love to develop for Android, which they feel is more open and “right” than iOS, but the gold rush is in Apple-land. While Hacker News might vote up a story every quarter pointing out the actual pennies on the dollar revenue in the Great App Game, developers still see the chance to cash in. These desires drive interest in mobile web (using web technologies for native apps or delivering mobile web apps), and a wider acceptance of the app store idea in (completely) different domains. Apple and Android dominate here: little is said (aside from a few snickers here and there) of Nokia, Samsung, MeeGo, Adobe, Microsoft, RIM, etc.
  • Elder Companies go Cocoon – the big tech companies like IBM, Oracle, HP, Microsoft, Cisco, and even “young folks” like VMWare are going bonkers with consolidation, portfolio shake-ups (“hey, we’re Cisco, wanna buy some servers?”), and otherwise doing something beyond collecting their tasty revenue streams. These companies used to have their ecosystems staked out, and then Cisco came along and started eating from HP, IBM, Dell, and other hardware folks’ buffets. Throw in Oracle buying Sun and recasting their story as Oracle vs. IBM along with Tennis-buddy-gate, some Java custody battles, and there’s just hijinks aplenty. The question here is where everyone will land, and what parts of the market each vendor will carve out for the next 10 years of boring but highly profitable revenue streams (for example, Dell has a window of opportunity to move into high-end servers). Scrappy youngin’s are hoping the new age of cloud and SaaS will just eat all the old folks lunch, and I sure like their optimism. It’s adorable.

Others

These are just the top three, at the moment. There are other longer-term hotnesses out there, and ones deeper in the stack, to pick a few:

  • Big Data/Analytics is a huge iceberg floating out there. NoSQL is a sort of sibling here.
  • The possible demotion of the desktop/laptop as the primary computing device in favor of smart phones, tablets, and even Internet-connected TVs can piss away hours of day-dreaming.
  • Figuring out sales and marketing automation to start using the web as a core store-front (or “point of customer engagement,” if you prefer) is taking over ISV go-to-market, and even getting some traction outside of tech. If you Google for everything, don’t you think your customers do too?
  • The great on-prem to SaaS rewrite is a bundle of cash, time, and fun waiting to happen if buyers can get over cloud-FUD and ISVs can figure out the business models behind it beyond those “we have to do it” imperatives that don’t quite work in spreadsheet columns.
  • If you’re into this kind of thing, the open source world is oddly rudderless at the moment. Many of the same parties are still there, doing The Lord’s Work, but all this cloud and mobile business has shifted attention – and, more importantly, open source has gone mainstream, it’s how software is done. (And don’t even start on standards bodies. OAuth anyone?)

While that’s what I see at the moment, what are you, dear readers seeing? What do you spend your time thinking about?

Disclosure: IBM, Microsoft, Dell, Cloud.com, CloudScaling, Adobe, and VMWare are clients.

Michael is an industry analyst with RedMonk.


[Windows Azure Compute] [South Central US] reported [Yellow] Compute Service Degradation in South Central US on 10/8/2010:

image


Judith Hurwitz (@jhurwitz) listed Eight things that changed since we wrote Cloud Computing for Dummies in this 10/8/2010 post:

I admit that I haven’t written a blog in more than three months — but I do have a good reason. I just finished writing my latest book — not a Dummies book this time. It will be my first business book based on almost three decades in the computer industry. Once I know the publication date I will tell you a lot more about it. But as I was finishing this book I was thinking about my last book, Cloud Computing for Dummies that was published almost two years ago.  As this anniversary approaches I thought it was appropriate to take a look back at what has changed.  I could probably go on for quite a while talking about how little information was available at that point and how few CIOs were willing to talk about or even consider cloud computing as a strategy. But that’s old news.  I decided that it would be most interesting to focus on eight of the changes that I have seen in this fast-moving market over the past two years.

Change One: IT is now on board with cloud computing. Cloud Computing has moved from a reaction to sluggish IT departments to a business strategy involving both business and technology leaders.  A few years ago, business leaders were reading about Amazon and Google in business magazines. They knew little about what was behind the hype. They focused on the fact that these early cloud pioneers seemed to be efficient at making cloud capability available on demand. No paperwork and no waiting for the procurement department to process an order. Two years ago IT leaders tried to pretend that cloud computing was a passing fad that would disappear.  Now I am finding that IT is treating cloud computing as a centerpiece of their future strategies — even if they are only testing the waters.

Change Two: enterprise computing vendors are all in with both private and public cloud offerings. Two years ago most traditional IT vendors did not pay too much attention to the cloud.  Today, most hardware, software, and services vendors have jumped on the bandwagon. They all have cloud computing strategies.  Most of these vendors are clearly focused on a private cloud strategy. However, many are beginning to offer specialized public cloud services with a focus on security and manageability. These vendors are melding all types of cloud services — public, private, and hybrid into interesting and sometimes compelling offerings.

Change Three: Service Orientation will make cloud computing successful. Service Orientation was hot two years ago. The huge hype behind cloud computing led many pundits to proclaim that Service Oriented Architecture was dead and gone. In fact, the cloud vendors that are succeeding are those building true business services without dependencies, services that can migrate between public, private and hybrid clouds; that gives them a competitive advantage.

Change Four: System Vendors are banking on integration. Does a cloud really need hardware? Only two years ago, the dialog surrounded the contention that clouds meant no hardware would be necessary. What a difference a few years can make. The emphasis coming primarily from the major systems vendors is that hardware indeed matters. These vendors are integrating cloud infrastructure services with their hardware.

Change Five: Cloud Security takes center stage. Yes, cloud security was a huge topic two years ago, but the dialog is beginning to change. There are three conversations that I am hearing. First, cloud security is a huge issue that is holding back widespread adoption. Second, there are well designed software and hardware offerings that can make cloud computing safe. Third, public clouds are just as secure as an internal data center because these vendors have more security experts than any traditional data center. In addition, a large number of venture backed cloud security companies are entering the market with new and quite compelling value propositions.

Change Six: Cloud Service Level Management is a primary customer concern. Two years ago no one our team interviewed for Cloud Computing for Dummies connected service level management with cloud computing.  Now that customers are seriously planning for widespread adoption of cloud computing, they are seriously examining their required level of service for cloud computing. IT managers are reading the service level agreements from public cloud vendors and Software as a Service vendors carefully. They are looking beyond the service level for a single service and beginning to think about the overall service level across their own data centers as well as the other cloud services they intend to use.

Change Seven: IT cares most about service automation. No, automation in the data center is not new; it has been an important consideration for years. However, what is new is that IT management is looking at the cloud not just as a way to avoid the costs of purchasing hardware. They see automation of both routine functions and business processes as the primary benefit of cloud computing. In the long run, IT management intends to focus on automation and reduce hardware to interchangeable commodities.

Change Eight: Cloud computing moves to the front office. Two years ago IT and business leaders saw cloud computing as a way to improve back office efficiency. This is beginning to change. With the flexibility of cloud computing, management is now looking at the potential to quickly innovate business processes that touch partners and customers.


John Bodkin observed the Microsoft board praised Steve Ballmer for “pursuing new innovations that will position the Company to lead the transformation to the cloud (Azure and Office Web Apps)” in Bodkin’s Canning Steve Ballmer no Microsoft cure-all article of 10/8/2010 for NetworkWorld’s Software blog:

image Steve Ballmer has been CEO of Microsoft for more than a decade, and it seems he's spent much of that time fighting off speculation that Microsoft should fire him.

Ballmer, a 30-year Microsoft veteran who went to Harvard University with Bill Gates, has survived even as Microsoft faltered in the mobile and consumer markets and watched rival Apple take over Microsoft's former position as the world's most valuable technology company.

Some shareholders have complained that Microsoft's stock has underperformed for years and placed the blame at Ballmer's desk.  

imageEven Microsoft's Board of Directors, in a recent Securities and Exchange Commission filing, referred to "the unsuccessful launch of the Kin phone" and "loss of market share in the company's mobile phone business" when it decided not to award Ballmer the full bonus he was eligible for.

Although you could argue Microsoft's Windows, Exchange and Office businesses are humming along (despite the Vista debacle), Microsoft's mobile struggles in the face of Apple's iPhone and Google's Android platform are among the issues that constantly fuel speculation about Ballmer's future.

"My personal opinion is he is long past due," says Craig Montgomery, who led a grassroots shareholder activism movement last year in an attempt to force changes at Microsoft. "Mr. Ballmer is not the correct leader for Microsoft."

But the biggest question may not be "Should Ballmer be fired?" A more important question may be "who can replace him?"

Despite the so-called "consumerization of IT," in which the worlds of business and consumer technology are slowly merging together, it is still very difficult for any one company to be the dominant technology provider in both the home and corporate markets.

"Ballmer may be facing an impossible task and it may be he's not doing a particularly good job, but it may be that no one could. And maybe even Bill Gates couldn't," says industry analyst Roger Kay, president of Endpoint Technologies Associates.

That being said, Kay says he has come to believe that Microsoft could do better with a different CEO. "The [Microsoft] board should have set a timer on Ballmer's tenure and said 'if we don't start seeing certain metrics then we're going to find a replacement for you,'" Kay says.

Microsoft dominates its strongest categories as much as Apple dominates its strongest areas, but "where Microsoft is getting killed is they're not competitive where Apple is," says analyst Rob Enderle of the Enderle Group. Ballmer has opened himself up to criticism even though Microsoft's Windows business has succeeded against Linux, and Exchange has "knocked Lotus Notes right off the map," Enderle says.

Kay suggests that Microsoft's ambitions are outstripping its actual capabilities. Microsoft built gigantic businesses with Windows and Office, and servers and tools, but creating monopolies in more than one or two businesses may be beyond the reach of even Microsoft, Kay says.

Enderle argues that "there aren't many people on the planet" who could run a company like Microsoft, saying that even Oracle's Larry Ellison and leaders of companies such as IBM don't have to manage the diversity of categories that Ballmer is tasked with. "They are a software and product company that spans consumer to large-scale enterprise products. That's more breadth than IBM has in terms of customers, and exceeds the existing skills of, probably, the leaders of the other major software companies," Enderle says.

imageRead more: 2, 3, Next >. The quote about Azure is on page 3.


Lee Pender reported Ballmer Talks Up Cloud Vision in a 10/7/2010 to his Redmond Channel Partner blog:

image Let's forget for a moment that Microsoft is very likely to embarrass itself with the Windows Phone 7 launch next week and try to focus on something positive. Steve Ballmer this week re-re-re-reiterated Microsoft's commitment to the cloud.

This time, he did it in Sweden, no doubt with pickled herring and outrageously priced alcohol for everybody. (What a beautiful city Stockholm is, though. So clean you could eat off the sidewalk, if you could stand to eat pickled herring.)

He also told a crowd in Germany (just in time for Oktoberfest, eh, Steve -- or does that still actually happen in late September?) that Microsoft is investing billions of dollars in data centers. On top of all that, Microsoft made its first official acquisition of 2010 this week, snapping up a company called AviCode whose products, in part, monitor cloud applications.

imageAlthough he clearly planned well in advance to say these things, Ballmer's timing in talking about the cloud is good, given Goldman Sachs's recent sage advice (eye roll here [see below post]) about how Microsoft should be more involved in cloud technologies. There's no word on whether Ballmer used a one-finger or two-finger salute when referring to Goldman, as the custom is different in the U.S. and Europe. (OK, OK...so he didn't mention Goldman, as far as we've read, and he probably didn't "salute" at all. We kind of figure that he wanted to, though.)

All of this cloud stuff is good stuff, of course, although it remains to be seen how a large swath of Microsoft partners will fit into Redmond's cloud strategy in a practical sense. In any case, we're much more comfortable with Ballmer talking enterprise technology than we will be when he takes the stage in New York next week.

Is Microsoft on the right path in the cloud? Why or why not? Send your thoughts to lpender@rcpmag.com.

Having spent many months in Sweden (mostly in Skåne, Helsingborg and Trelleborg), summer and winter, I’ve learned to like pickled herring and love Gravadlax.


Lee Pender posted Goldman Sachs Masters the Obvious with Microsoft Rant to his Redmond Channel Partner blog on 10/4/2010:

imageIt's one of those head-slapping moments. Some guru from Goldman Sachs comes out and makes a bunch of suggestions about how Microsoft should run its business, and we at RCPU sit here and slap our heads saying, "Why didn't we think of that?" Except we did...and we weren't the only ones.

Goldman downgraded Microsoft's stock at some point over the weekend, and, in doing so, a Goldman analyst went off on a rant about what Microsoft needs to do to get itself out of the decade-long stock-price slump it's in. As Todd Bishop's always excellent blog reports, Goldman had three suggestions for Redmond. And we quote, from the Goldman report as quoted on Bishop's blog:

image"(1) A materially increased dividend beyond the recent 23% increase, moving Microsoft into the top 20 dividend-paying companies in the S&P 500 in terms of dividend yield. We believe this would open the door to a larger investor base and keep the company more diligent from a spending perspective. (2) A coherent consumer strategy that could involve paring back investments and/or divesting more peripheral assets such as gaming. (3) Market leadership in Cloud. Microsoft has a strong portfolio of enterprise data center assets and could become a leader in Cloud deployments, but the competitive environment remains highly in flux, with Microsoft still not a clear 'winner,' in our view."

We'll toss out the first suggestion as being sensible enough but a little too insider-Wall Street for our purposes. It's the last two that have caused the self-inflicted red marks on our collective RCPU forehead. Basically, No. 2 is talking about Microsoft focusing on core products and either spinning off or paring down its flailing consumer businesses, including the money-losing entertainment division.

What a radical concept! We've only been banging on about that topic here in this newsletter for, oh, four years or so. If you've read RCPU at all over the years, you've noted our warnings to Microsoft that the company should stop trying to be cool, stop trying to be everything to everybody and start honing in again on operating systems, servers and other long-time moneymakers as well as on cloud technologies (more on that in a minute).

To be fair, Microsoft has taken that tack to some extent—Windows 7 and SharePoint are good examples of enterprise technologies that are both useful and successful (or, at least, Windows 7 will be successful given time). But Redmond remains entrenched in trying to be an entertainment titan, a mobile developer and possibly still some sort of advertising agency. And all of that stuff is either a brain drag or a financial drag on the company -- or both. Microsoft should spin off, wind down or just ditch a lot of its consumer-focused efforts. But haven't we all known this for a while? And by "we," we mean most industry observers, not just RCPU? For heaven's sake, how long has Mini-Microsoft been around, anyway?

The mobile space, in particular, has been tricky for Microsoft -- as evidenced by the company's desperate Android patent lawsuit and its repeated failure to regain mobile momentum. Here again, Goldman states the obvious in observing Microsoft's place in that market. The Financial Post quotes Goldman's Sarah Friar thusly:

"'We believe that top-line momentum and hence investor sentiment on Microsoft's core Windows and Office franchises is unlikely to improve until the company gains a firmer [read: any] foothold in the growing migration to mobile devices – both smartphones and tablets,' said Ms. Friar. 'We don't see this happening this year, as Apple's iPad and iPhone plus Google's Android operating system are well established,' she said."

My goodness! What an eye-opener! We do wonder, actually, why Microsoft absolutely has to have its own mobile operating system. Is there no room for some partnership here with a more established developer? Don't Windows and, in particular, Office, still carry some weight in the marketplace?

It seems as though Microsoft would be better served to scrap plans to rely on Windows Phone 7, a mobile OS that will be obsolete the minute it comes out next week, as its candidate for entry back into the mobile market and instead maybe find a way to get other Microsoft technologies licensed onto somebody else's platform.

Sure, that's not the Microsoft way -- the company would have to (gasp!) actually cede ownership of a market. But Microsoft isn't a serious mobile competitor now, and its path toward becoming one seems quixotic at best. Maybe Microsoft has hacked away at mobile windmills for long enough.

Back to Goldman's revelations, though -- and they only get better. Dig suggestion No. 3. Here it is again: "(3) Market leadership in Cloud. Microsoft has a strong portfolio of enterprise data center assets and could become a leader in Cloud deployments, but the competitive environment remains highly in flux, with Microsoft still not a clear 'winner,' in our view."

What? Microsoft needs to be in the cloud? And nobody has claimed the cloud space yet? Geez, even Microsoft gets this one. Azure and the constant "all-in-for-the-cloud" messaging coming out of Redmond are a pretty good indication that Microsoft wants to position itself as a leader in the cloud and is making positive moves toward doing so. But, hey, Goldman, thanks for the heads up.

As expected, Goldman's rant has led to another nice wave of articles talking about how Microsoft is finished...and basically saying the same thing that Goldman said this week and that we've been saying for years. Again, it's not that we disagree with Ms. Friar or with Goldman Sachs. We actually agree, for the most part -- and maybe these words coming from Goldman will actually get somebody's attention in Redmond.

Please forgive us, though, if we say that we've heard all of this before -- because we've written it, over and over again, in this newsletter over the last few years. Then again, on second thought, Goldman might have some credibility here that we don't have.

After all, the firm is warning Microsoft about potential failures -- and nobody knows failure like Goldman Sachs, a company that likely wouldn't exist today had the U.S. government not covered it with a big TARP. So, keep preaching, Goldman. We'll be sitting behind you in the choir, which apparently either isn't singing loudly enough or is reaching the wrong members of the congregation.

What's your take on Goldman's suggestions for Microsoft? What should Microsoft do to get itself back on track? Send your thoughts to lpender@rcpmag.com.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

Chris Czarnecki asked How Dedicated Is Your Private Cloud ? in a 10/8/2010 post to the Learning Tree blog:

image On a recent consultancy assignment I was advising an organisation on their cloud computing strategy. The team were convinced that they required a private cloud and rightly so – their deployment scenario demands a private cloud for security reasons. A key question to be decided was whether the cloud should be on or off-premise, but this is not the only question – especially if an off-premise private cloud is required. The reason is that the term ‘private cloud’ can mean different things depending on the cloud provider.

Consider the following scenario – you provision an off-premise private cloud from a provider. Your resources will probably be hosted in a separate virtual LAN configuration – the key here being virtual LAN. Your resources will still be on machines that are potentially shared by others. This is the private cloud provision provided by most cloud organisations. An interesting offering that is different comes from Stratogen. Amongst their cloud offerings they have a dedicated private cloud. How does this differ from their standard private cloud? The key is the word dedicated. Here your cloud is hosted on a custom built infrastructure, with fully dedicated resources. Of course there are cost implications, but the security is stronger and performance is less affected by the activity of others. To further confuse the options, it is possible to have a hybrid private cloud – a mixture of on-premise and off-premise private clouds, which could be dedicated or VLAN-based.

So what started off looking like a simple decision for my consulting client, i.e. a private cloud, actually required a lot more consideration based on how dedicated the private cloud should be, as well as whether it should be hybrid or not. Gaining the knowledge of this vital area of cloud computing is not easy. Most vendors use the general term ‘private cloud’ without fully explaining the details. This makes it very hard for organisations to make sense of what they are actually buying into. It is this kind of knowledge that you gain on Learning Tree’s Cloud Computing course. If you are interested, why don’t you come along? I am next teaching this course in London shortly – October 27-29.  If you are thinking of coming along, register quickly; it’s almost full!  Hopefully I will see you there.


Bruce Hoard posted AVIcode Purchase Dives Deep into Monitoring as his “Hoard” Facts column of 10/6/2010 for 1105 Media’s Virtualization Review site:

And to think that Redmond makes fun of VMware for acquiring their expertise via the M&A process.

image What's wrong with snapping up otherwise hard-to-find expertise by buying an up-and-coming company from time to time? Beats trying to develop everything internally. Which is no doubt what Microsoft was thinking when they bought Baltimore-based AVIcode, a previously private firm known for its .NET application monitoring capabilities.

image

Brad Anderson, Microsoft corporate VP, Management and Services Division, writes in his blog that Redmond's move to SaaS and PaaS (via Windows Azure) has challenged the company to extend its monitoring capabilities to do three key tasks: "Enable an understanding of how end users experience an application's performance and quality, trace the performance of critical business transactions, and gain insight into the relationship of the hardware and the software components of a distributed application or service."

image Enter AVIcode. Anderson says AVIcode solutions include all those monitoring metrics, and they are all integrated with System Center Operations Manager. In fact, he adds, Microsoft has been deploying AVIcode with Operations Manager in its far-flung data centers on services such as XBOX for many years.

"Bringing together the capabilities of Operations Manager with the enhancements from AVIcode enables organizations to truly get the 360-degree view of their service--independent of where the service is hosted, whether a datacenter/cloud, in a partner's hosted datacenter/cloud, or from a public cloud solution such as Windows Azure," Anderson says.

Everyone else is doing cloud-related M&A, why not Microsoft?


<Return to section navigation list> 

Cloud Security and Governance

Todd Hoff quotes Robert Haas in a 4 Scalability Themes from Surgecon post of 8/10/2010 to the High Scalability blog:


Robert Haas, in his SURGE Recap of the Surge conference, reflected a bit and came up with an interesting checklist of general themes from what he saw. I'm directly quoting his post, so please see the post for a full discussion. He uses this framework to think about the larger picture and where PostgreSQL stands in its progression.

  1. Make use of the academic literature. Inventing your own way to do something is fine, but at least consider the possibility that someone smarter than you has thought about this problem before.
  2. Failures are inevitable, so plan for them. Try to minimize the possibility of cascading failures, and plan in advance how you can operate in degraded mode if disaster (or the Slashdot effect) strikes.
  3. Disk technology matters. Drive firmware bugs are common and nightmarish, and you can expect very limited help from the manufacturer, especially if the drive is billed as consumer-grade rather than enterprise-grade. SSDs can save you a lot of money, both because a given number of dollars buys more IOs-per-second, and because electricity isn't free.
  4. Large data sets require horizontal scalability.  In the era of 1TB drives, "large" doesn't mean quite what it used to,  but even though the amount of data you can manage with one machine is growing all the time, the amount of data people want to manage is growing even faster.


<Return to section navigation list> 

Cloud Computing Events

jmacc posted Visual Studio Live! Las Vegas 2011 Call for Presentations on 10/7/2010:

Submit Proposals at: http://cfp.vslive.com/

Deadline for submission: Friday, October 22, 2010.

Visual Studio Live! invites you to submit a proposal to present at the Visual Studio Live! Las Vegas 2011 event on April 11 – 15 at the Planet Hollywood Resort & Casino. For over 18 years, Visual Studio Live! (VSLive!) events have provided expert solutions for .NET developers.

Attendees come to Visual Studio Live! to acquire practical, pragmatic and immediately applicable knowledge. They come for inspiration, to be shown a vision of a better future through the use of concepts, techniques, patterns and technology that they can apply in their organizations. And they come for a glimpse of the future of technology, the cool stuff that Microsoft (and others) are building that may not be useful today, but will factor into their strategic planning process.

We welcome presentation proposals that are suited to our educational foci, which include (but are not limited to):

Silverlight/WPF                                  

  • Silverlight, WPF, XAML, Visual Studio 2010’s XAML Designer, Expression Blend, WCF RIA Services

Web

Visual Studio 2010/.NET 4

  • Visual Studio features, TFS, new language features, parallel programming extensions

SharePoint

  • SharePoint 2010, Office 2010, Visual Studio Tools for Office
  • SharePoint Access Services and Access Web Databases

Cloud Computing

  • Includes cloud, server and messaging technologies
  • Windows Azure, Amazon Web Services, AppFabric (for Windows Server and Windows Azure), REST services programming, Project “Dallas”, WCF, Windows Workflow

Data Management

  • SQL Server, Microsoft BI and PowerPivot, Entity Framework, WCF Data Services, OData, Sync Services
  • SQL Azure, SQL Server “Denali” futures

HTML 5

  • Core HTML 5: new concepts, new tags, and enhancements to old ones
  • Extensions to ECMAScript/JavaScript, CSS 3
  • Rich media in HTML 5 with the audio and video tags
  • 2D drawing/animation with the canvas tag and SVG
  • Offline apps, database storage, and data entry enhancements

"Simplification" tools

  • Including ASP.NET Web Pages (Razor)
  • ASP.NET MVC w/ Razor as a view engine
  • WebMatrix, Web Platform Installer and other Web quick starts
  • LightSwitch

Windows Phone 7

  • SL for WP7
  • Expression Blend for WP7
  • XNA for WP7
  • Managing WP7 push notifications
  • Developing Web apps optimized for WP7
  • Monetizing your app on the WP7 marketplace

Developing for iOS devices using MonoTouch

To ensure that you have a positive conference experience, please read the entire Call for Presentations guidelines before submitting your proposals.

If you have any questions regarding the conference or CFP submission process, please contact Suzanne Young, Event Manager at SYoung@1105Media.com or (240) 479-1479.

We look forward to seeing you in Las Vegas in April, 2011.

The Visual Studio Live! Event Team: www.vslive.com


Registration opened for Cloud Connect 2011 (#ccevent) on 10/8/2010 (thanks to @jamesurquhart for the heads up). Cloud Connect 2011 will take place at the Santa Clara, CA Convention Center on 3/7 through 3/10/2011:

Conference

See the latest cloud technologies and learn from thought leaders in Cloud Connect’s comprehensive conference. Topics include:

  • Cloud Futures and Roadmaps
  • Cloud Risks, Challenges and Governance
  • New Infrastructure
  • Developing for the Cloud
  • Dealing with Big Data
  • Migration Strategies
  • ROI, Cost and Economics
  • Standards, Governments and Industry
  • Case Studies and Lessons Learned
Workshops

Dive deep into key cloud computing topics on Monday, March 7 including:

  • How to Operate Securely Using Cloud Computing
  • Interoperability and Portability Across Cloud Ecosystems
  • Cloudy Operations
  • Data Storage in the Cloud
  • Building Applications on Amazon Web Services
Expo Pavilion

Meet top vendors of cloud technologies to find the right products and services for your organization.

Launch Pad

A competition that lets companies present innovative collaboration applications (either in development and about to launch, or recently launched), with the winner chosen by live text-to-vote at Enterprise 2.0 Santa Clara.

Call for Papers

The Call for Papers is now closed. Thank you to all who submitted.

Event At A Glance



<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Alex Williams reported on the rise of the Lucene/Solr search technology in his The RSS Connection: New Search, Big Data and the Web App Movement post of 10/8/2010:

One thing the recession has done is fuel innovation by imposing financial constraints. The constraints have led to some dramatic market changes, in particular related to the rise of Lucene/Solr, the open-source search technology.

As a result, it feels like we are at the beginning of something. It's like we are witnessing the emergence of a new search. I tweeted as much today from Lucene Revolution, the conference I am attending here in Boston. It is reminiscent of 2003, when blogs gave rise to RSS, a format designed to help us make sense of the online content that had started to flow like never before.

As RSS helped us make sense of information in 2003, so Lucene/Solr is helping us make sense of data today. Big data is here to stay. It is increasingly fueling Web apps and is leading to new innovations in search, primarily in the open-source market. You see it here at Lucene Revolution. Twitter, LinkedIn, Salesforce.com and a host of others are discussing how the Web apps they have developed need Hadoop and Lucene/Solr to function properly and do what they want them to do.

RSS in many ways served as the precursor to a new breed of syndication technologies and the modern Web app, which gave rise to the activity stream. It's a river of news, a term coined by Dave Winer, one of the Web's pioneering developers. Twitter came out of this wave, albeit indirectly, through the creations of Evan Williams and his team. Williams started Pyra Labs, which gave rise to Blogger. After selling to Google, Williams became captivated by the next incarnation of RSS. That was podcasting, which never really took off in a big way but, perhaps more importantly, served as a catalyst for a new wave of innovation in media delivery. Odeo flopped but Twitter rose from its ashes.

Twitter represented a new way to get news. It harnessed APIs like no other service had before. Developers embraced it. It marked a new wave, the app movement, which in turn has fueled the fast rise of social technologies. Text, pictures, videos are at the heart of these apps. The more we create, the more reason we give to developers to build new apps.

In the midst of this, the financial markets collapsed and the economy spiraled downward. Suddenly, it made even more sense to develop your own apps.

And here we are today. Budgets are still thin but the pipes are fat with content. So much so that we need new tools to make sense of it. What is coming from that is the start of something pretty significant. It's the new search and it is helping us make sense of things.

Sometimes it feels like 2003.


Alex Popescu’s RavenDB: New Features post of 10/8/2010 to his myNoSQL blog offers details about Ayende Rahien’s NoSQL database (a brief client-API sketch follows the feature list below):

RavenDB, the .NET document database, seems to be very active these days. While still pretty young, RavenDB has been adding a lot of features lately to catch up with other, better-known document databases like CouchDB or MongoDB. [Link added.]

The latest added (or just documented[1]) features include:

  • Replication Bundle: master slave, master multi slaves, and multi master
  • Versioning Bundle – seamless audit trails for document changes.
  • Spatial queries – for geo location searches.
  • Authorization Bundle – per document authorization.
  • Document metadata – manipulating the document metadata at the client side.
  • Patching documents – avoid having to send the entire document on the wire, send just the changes.
  • Custom serialization – control how the client API serializes your documents.
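To make a couple of these items concrete, here is a minimal sketch (not Ayende’s code) of the RavenDB .NET client storing a document and then reading its metadata on the client side. The namespaces, the DocumentStore URL, and the Raven-Entity-Name metadata key reflect the 2010-era client and are assumptions that may differ in later builds; the Order class is purely illustrative.

```csharp
// Minimal sketch: store a document with the RavenDB .NET client, then read
// its metadata client-side. Namespaces and the metadata key follow the
// 2010-era client and may differ in later builds.
using System;
using Raven.Client;
using Raven.Client.Document;

public class Order
{
    public string Id { get; set; }       // RavenDB assigns this on Store()
    public string Customer { get; set; }
}

public class RavenMetadataSketch
{
    public static void Main()
    {
        // Hypothetical local server URL; RavenDB listens on port 8080 by default.
        using (IDocumentStore store = new DocumentStore { Url = "http://localhost:8080" })
        {
            store.Initialize();

            using (IDocumentSession session = store.OpenSession())
            {
                var order = new Order { Customer = "Contoso" };
                session.Store(order);
                session.SaveChanges();

                // Client-side access to the document's metadata, one of the
                // features listed above.
                var metadata = session.Advanced.GetMetadataFor(order);
                Console.WriteLine(metadata["Raven-Entity-Name"]);
            }
        }
    }
}
```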

While all these sound like good additions to RavenDB, I must confess that there are a couple of questions that I’d like answered:

  • how battle tested is RavenDB? are there any production deployments?
  • how many developers are behind RavenDB?
  • what support can you get for RavenDB?

Many of these questions do apply to other NoSQL databases too though.

I enjoyed Ayende’s remark: “I am rapidly coming to the realization that if it isn’t documented, it doesn’t exist.”


Alex Williams asked Weekly Poll: Salesforce.com to Adopt REST APIs - What is the Significance? in a 10/8/2010 post to the ReadWriteCloud:

For the first time in its history, Salesforce.com will launch a REST API. Designed for the Force.com platform, the new API is a departure for Salesforce.com, which has historically provided SOAP as a means for integrating the SaaS provider's technology with third-party applications.

The news, which we first saw on Programmable Web, has led us on a journey over the past two days to seek some answers about what the adoption says about Salesforce.com and its 10-year-old architecture. It also marks what appears to be a turning point for REST, as adoption has now spread across a spectrum of providers. Most of all, it shows once again how pioneering developers have proven the acceptance of easy-to-use, Web-based services.


According to Programmable Web, the Salesforce.com API will enable "simple HTTP and JSON as a possible output format, to make integrating with Force.com fast and easy." Salesforce.com posted this week about the integration with OAuth 2.

As Programmable Web points out, REST APIs are fast outpacing SOAP. The name of the game is ease of use and fast integration. REST is winning hands down in this respect, taking away from SOAP, the historical foundation of the Salesforce.com API.
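To illustrate what "simple HTTP and JSON" looks like in practice, here is a minimal sketch of a GET request against a Force.com REST resource from C#. The instance URL, API version, resource path, and the "Authorization: OAuth <token>" header scheme are assumptions based on the preview documentation; substitute your own org's values and an access token obtained via OAuth 2.

```csharp
// Minimal sketch: call a Force.com REST resource over plain HTTP and print
// the JSON response. Host, API version, path, and header scheme are
// assumptions from the preview docs.
using System;
using System.IO;
using System.Net;

public class ForceRestSketch
{
    public static void Main()
    {
        const string accessToken = "REPLACE_WITH_OAUTH_ACCESS_TOKEN"; // hypothetical token
        const string url =
            "https://na1.salesforce.com/services/data/v20.0/sobjects/Account/";

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.Accept = "application/json";                     // ask for JSON output
        request.Headers.Add("Authorization", "OAuth " + accessToken);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The body is plain JSON describing the Account object.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```

The contrast with SOAP is the point: no WSDL, no proxy generation, just an HTTP request and a JSON response.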


Salesforce.com, SOAP and its Historical Context

Paul Greenberg is a long-time observer of the enterprise and the CRM space. He's the author of CRM at the Speed of Light and a longtime watcher of Salesforce.com.

Until now, Salesforce.com has had no need to adopt REST. SOAP has worked effectively as a way to work with third-party applications. SOAP is being displaced, but it is still used in the enterprise and is more deeply rooted in the history of the Web, a reason why Salesforce.com has depended on it. The technology was developed in the same time period in which Salesforce.com made its name in the market.

During that time, enterprise companies created their own flavor of Web services, designed with business rules built in. Salesforce.com had a different approach due to its multi-tenant environment. SOAP worked just fine as a way to interface with other applications.

"There was no fundamental reason why they had to go a new architecture," Greenberg said. "To this day it works. It is the most commonly accepted standard that is out there. It has value to Saleforce. It is easily adoptable.

"That said - in last two years - REST has been gaining ground quickly. Consequently, even the best enterprise SOA companies are developing REST APIs."

But it also shows that Salesforce.com is Java-based and arguably Java-biased.

Critics say you can see this in the thinking behind the VMforce offering as well, where Java is the language for the PaaS offering from Salesforce and VMware. That targets a relatively small base of traditional programmers who work in medium to large enterprises.

It is certainly an indicator of their legacy nature and legacy thinking that it has taken them so long to launch a REST API. REST is not just language-neutral but programmer-neutral: regardless of your level of skill or education, any programmer can use REST APIs.


<Return to section navigation list> 

0 comments: