Monday, October 25, 2010

Windows Azure and Cloud Computing Posts for 10/25/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Updated 10/26/2010 with new articles marked •

Note: This post is updated daily or more frequently, depending on the availability of new articles, in the sections that follow.



Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available over HTTP at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.


Azure Blob, Drive, Table and Queue Services

Steve Marx (@smarx) announced in a 10/25/2010 tweet:

I built my own TwitPic with a Windows Phone 7 client. Let's see if it works

Here’s an early diagram:

Stay tuned for further developments.


Rainer Stropek posted a tutorial about Windows Azure Storage to the Time Cockpit blog on 10/25/2010:

The Windows Azure platform offers different mechanisms to store data permanently. In this article I would like to introduce the storage types of Windows Azure and demonstrate their use with an example application.

Storage Types In Windows Azure

If you want to store data in Windows Azure you can choose from four different data stores:

  1. Queues
  2. SQL Azure
  3. Blob Storage
  4. Table Storage

Windows Azure Queues

The first one is easy to explain. I am sure that every developer is used to the concept of FIFO queues. Azure queues can be used to communicate between different applications or application roles. Additionally, Azure queues offer some quite unique features that are extremely handy whenever you use them to hand off work from an Azure web role to an Azure worker role. I want to point your attention especially to the following two:

  • Auto-reappearance of messages
    If a receiver takes a message out of the queue and crashes while handling it, it is likely that the receiver will not be able to reschedule the work before dying. To handle such situations, Azure queues let you specify a time span (the visibility timeout) when getting an element out of the queue. If you do not delete the received message within that time span, Azure will automatically add the message to the queue again so that another instance can pick it up.
  • Dequeue counter
    The dequeue count is closely related to the previously mentioned auto-reappearance feature. It can help detect "poisoned" messages. Imagine an invalid message that kills the process that has received it. Because of auto-reappearance, another instance will pick up the message - and will also be killed. After some time all your workers will be busy dying and restarting. The dequeue counter tells you how often the message has already been taken out of the queue. If it exceeds a certain number, you can remove the message without further processing (logging it would be a good idea in such a situation).

Before we move to the next type of storage mechanism in Azure let me give you some tips & tricks concerning queues:

  • Azure queues have not been built to transport large messages (message size must not be larger than 8KB). Therefore you should not include the messages' payload in the queue messages. Store the payload in any of the other storages (see below) and use the queue to pass a reference.
  • Write applications that are tolerant of system failures, and therefore make your message processing idempotent.
  • Do not rely on a certain message delivery order.
  • If you need really high throughput, package multiple logical messages (e.g. tasks) into a single physical Azure queue message, or use multiple queues in parallel.
  • Add poisoned-message handling (see the description above and the sketch that follows this list).
  • If you use your Azure queues to pass work from your web roles to your worker roles, write some monitoring code that checks the queue length. If it gets too long, you could implement a mechanism to automatically start new worker instances. Similarly, you can shut down instances if your queue remains empty or short for a longer period of time.
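
To make the auto-reappearance (visibility timeout) and dequeue-counter behavior concrete, here is a minimal sketch of a worker-role receive loop using the StorageClient library from the Windows Azure SDK. The queue name, the retry threshold, and the ProcessMessage helper are my own illustrative assumptions, not code from Rainer's article:

' Imports System.Diagnostics, Microsoft.WindowsAzure, Microsoft.WindowsAzure.StorageClient
Dim account = CloudStorageAccount.Parse("UseDevelopmentStorage=true")
Dim queue = account.CreateCloudQueueClient().GetQueueReference("workitems")
queue.CreateIfNotExist()

' Hide the message for two minutes; if it is not deleted in time it reappears automatically.
Dim msg = queue.GetMessage(TimeSpan.FromMinutes(2))
If msg IsNot Nothing Then
    If msg.DequeueCount > 3 Then
        ' Poisoned message: log it and remove it instead of processing it yet again.
        Trace.TraceWarning("Discarding poisoned message {0}", msg.Id)
        queue.DeleteMessage(msg)
    Else
        ProcessMessage(msg)        ' assumed application-specific work
        queue.DeleteMessage(msg)   ' delete only after the work has succeeded
    End If
End If
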
SQL Azure

Yes, SQL Azure is SQL Server in the cloud. No, SQL Azure is not just another SQL Server in the cloud. In creating SQL Azure, Microsoft did much more than buy some servers, put Hyper-V on them, and let the virtual machines run SQL Server 2008 R2. It is correct that behind the scenes SQL Server is doing the heavy lifting deep down in the dark corners of Azure's data centers. However, a lot of things happen before you get access to your server.

The first important thing to note is that SQL Azure comes with a firewall/load balancer that you can configure e.g. through Azure's management portal. You can configure which IP addresses should be able to establish a connection to your SQL Azure instance.

If you have passed the first firewall you get connected to SQL Azure's gateway layer. I will not go into all the details about the gateways because this is not a SQL Azure deep dive. The gateway layer is on the one hand a proxy (it finds the SQL Server nodes that are dedicated to your SQL Azure account) and on the other hand a stateful firewall. "Stateful firewall" means that the gateway understands TDS (Tabular Data Stream, SQL Server's native communication protocol) and checks TDS packages before they hit the underlying SQL Servers. Only if the gateway layer finds everything OK with the TDS packages (e.g. right order, user and password OK, encrypted, etc.) are your requests handed over to the SQL Servers.

The beauty of SQL Azure is that you as a developer can work with SQL Azure just like you work with a SQL Server that stands in your own data center. SQL Azure supports the majority of programming features that you are used to. You can access it using ADO.NET, Entity Framework or any other data access technology that you like. However, there are some limitations in SQL Azure for security and scalability reasons. Please check MSDN for details about the restrictions.

Again some tips & tricks that could help when you start working with SQL Azure:

  • Use SQL Server Management Studio 2008 R2 in order to be able to manage your SQL Azure instances in your Object Explorer.
  • Never forget that SQL Azure is always a database cluster behind the scenes (you get three nodes for every database). Therefore you have to follow all Microsoft guidelines for working with database clusters (e.g. implement auto-reconnect in case of failures, auto-retry, etc.; check MSDN for details, and see the sketch that follows this list).
  • Don't forget to estimate costs for SQL Azure before you start using it. SQL Azure can be extremely cost-efficient for your applications, but there are situations (especially if you have very large databases or a lot of very small ones) in which SQL Azure can get expensive.
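
As a minimal illustration of the auto-reconnect/auto-retry guideline above, the sketch below simply retries opening a connection and running a command a few times. The connection string variable, retry count, and back-off interval are assumptions for illustration only:

' Imports System.Data.SqlClient
Const maxAttempts As Integer = 3
For attempt As Integer = 1 To maxAttempts
    Try
        Using conn As New SqlConnection(sqlAzureConnectionString) ' assumed connection string variable
            conn.Open()
            ' ... execute your command here ...
        End Using
        Exit For ' success
    Catch ex As SqlException When attempt < maxAttempts
        ' Transient failure: wait a moment, then reconnect and retry.
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(2))
    End Try
Next
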
Windows Azure Blob Storage

Windows Azure has been built to scale. Therefore typical Azure applications consist of many instances (e.g. web farm, farm of worker machines, etc.). As a consequence there is a need for a kind of file system that can be shared by all computers participating in a certain system (clients and servers!). Azure Blob Storage is the solution for that.

Natively, Azure Blob Storage speaks a REST-based protocol. If you want to read or write data from and to blobs you have to send HTTP requests. Don't worry, you do not have to deal with all the nasty REST details. The Windows Azure SDK hides them from you.

As with SQL Azure, I will not go into all the details of Azure Blob Storage here. You will see how to access blobs in the example shown below. Let me just give you the following tips & tricks about what you can do with Azure blobs:

  • Azure Blob Storage has been built to store massive amounts of data. Don't be afraid of storing terabytes in your blob store if you need to. Even a single blob can hold up to 1TB (page blobs).
  • Azure distinguishes between block blobs (streaming + commit-based writes) and page blobs (random read/write). Maybe I should write a blog post about the differences... Until then please check MSDN for details.
  • Blobs are organized into containers. All the blobs in a container can be structured in a kind of directory system similar to the directory system that you know from your on-premises disk storage. You can specify access permissions at the container and blob level.
  • You can programmatically ask for a shared access signature (i.e. a signed URL) for any blob in your Azure Blob store. With this URL a user can directly access the blob's content (if necessary you can restrict the time until which the URL will be valid). Therefore you can e.g. generate a confirmation document, put it into blob store and send the user a direct link to it without having to write a single line of code for serving its content (btw - this also means less load on your web roles). A minimal sketch follows this list.
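
Here is the shared access signature sketch referred to in the last tip, again using the StorageClient library; the container name, blob name, and one-hour validity window are assumptions for illustration:

' Imports Microsoft.WindowsAzure, Microsoft.WindowsAzure.StorageClient
Dim account = CloudStorageAccount.Parse("UseDevelopmentStorage=true")
Dim blob = account.CreateCloudBlobClient() _
                  .GetContainerReference("documents") _
                  .GetBlobReference("confirmation.pdf")

' Ask for a read-only signature that expires in one hour.
Dim sas = blob.GetSharedAccessSignature(New SharedAccessPolicy With {
    .Permissions = SharedAccessPermissions.Read,
    .SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)})

Dim directUrl = blob.Uri.AbsoluteUri & sas ' hand this URL to the user
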
Windows Azure Table Storage

Azure Table Storage is not your father's database. It is a NoSQL data store. Just like with Azure Blob Storage you have to use REST to access Azure tables (if you use the Windows Azure SDK you use WCF Data Services to access Table Storage).

Every row in an Azure table consists of the following parts:

  1. Partition Key
    The partition key is similar to the table name in a RDBMS like SQL Server. However, every record can consist of a different set of properties even if the records have the same partition key (i.e. no fixed schema, just storing key/value pairs).
  2. Row Key
    The row key identifies a single row inside a partition. Partition key + row key have to be unique within a table.
  3. Timestamp
    Used to implement optimistic locking.

At the time of writing this article Azure Table Storage supports the following data types: String, binary, bool, DateTime, GUID, int, int64 and double.
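
A minimal entity definition makes these parts concrete. The following sketch uses the TableServiceEntity base class from the Windows Azure SDK's StorageClient library; the class and property names are assumptions for illustration, not code from the article:

' Imports Microsoft.WindowsAzure.StorageClient
Public Class TimeEntry
    Inherits TableServiceEntity ' supplies PartitionKey, RowKey and Timestamp

    Public Sub New(ByVal userName As String, ByVal entryId As String)
        MyBase.New(userName, entryId) ' PartitionKey = user, RowKey = unique id per user
    End Sub

    Public Sub New() ' parameterless constructor required by WCF Data Services
    End Sub

    Public Property Description As String
    Public Property Hours As Double
End Class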

So when should you use which one - SQL Azure or Azure tables? Here are some guidelines that could help you choose what's right for your application:

  • In SQL Azure storage is quite expensive while transactions are free. In Azure tables storage is very cheap but you have to pay for every single transaction. So if you have a small amount of data that is frequently accessed, use SQL Azure; if you have large amounts of data that have to be stored but are seldom accessed, use Azure tables. If you find both scenarios in your application you could combine both storage technologies (this is what we do in our product, time cockpit).
  • At the time of writing, SQL Azure offers only a single (rather small) machine size for databases. Because of this SQL Azure does not really scale. If you need more performance you have to build your own scaling mechanisms (e.g. distribute data across multiple SQL Azure databases using, for instance, Sync Framework). This is different for Azure tables: they scale very well. Azure will store different partitions (remember the partition key I mentioned before) on different servers in case of heavy load. This is done automatically! If you need and want automatic scaling you should prefer Azure tables over SQL Azure.
  • Azure Table Storage is not good when it comes to complex queries. If you need and want all the great features that T-SQL offers you, you should stick to SQL Azure instead of Azure tables.
  • The amount of data you can store in SQL Azure is limited whereas Azure tables have been built to store terabytes of data. …

Rainer continues with detailed “Azure Storage In Action” source code examples.


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

The Professional Developers Conference 2010 team delivered on 10/26/2010 a complete OData feed of PDC10’s schedule at http://odata.microsoftpdc.com/ODataSchedule.svc/.

The first session entry in the feed is for the “Building High Performance Web Applications” session.

The format is similar to that used by Falafel Software for their EventBoard app, which I described in my Windows Azure and SQL Azure Synergism Emphasized at Windows Phone 7 Developer Launch in Mt. View post of 10/14/2010.


Thanks to Jamie Thomson (@jamiet) for the heads-up about the preceding in his PDC schedule published as OData, but where's the iCalendar feed? post of the same date:

Chris Sells announced on twitter earlier today that the schedule for the upcoming Professional Developers' Conference (PDC) has been published as an OData feed at: http://odata.microsoftpdc.com/ODataSchedule.svc

Whoop-de-doo! Now we can, get this, view the PDC schedule as raw XML rather than on a web page or in Outlook or on our phone, how cool is THAT?  (conveying sarcasm in the written word is never easy but hopefully I've managed it here!)

Seriously, I admire Microsoft's commitment to OData, both in their Creative Commons licensing of it and support of it in a myriad of products but advocating its use for things that it patently should not be used for is verging on irresponsible and using OData to publish schedule information is a classic example.

A standard format for publishing schedule information over the web already exists; it's called iCalendar (RFC 5545). The beauty of iCalendar is that it is supported today in many tools (e.g. Outlook, Google Calendar, Hotmail Calendar, Apple iCal) so I can subscribe to an iCalendar feed and see that schedule information alongside, and intertwined with, my personal calendar and any other calendars that I happen to subscribe to. Moreover, the beauty of subscribing versus importing is that any changes to the schedule will automatically get propagated to me. Can any of that be achieved with an OData feed? No!

On the off-chance that anyone in the PDC team is reading this I implore you, please, publish the schedule in a format that makes it useful. OData is not that format.


As an aside, I am an avid proponent of iCalendar and have a strong belief that adoption of it both in our work and home lives could have significantly positive repercussions for all of us. With that in mind I actively canvass people to publish their data in iCalendar format and also contribute to Jon Udell's Elmcity project, which you can read more about at Elmcity Project FAQ. I encourage you to contribute.


As I mentioned in my Brendan Forster (@shiftkey), Aaron Powell (@slace) and Tatham Oddie (@tathamoddie) developed and demoed The Open Conference Protocol with a Windows Phone 7 app article in the “SQL Azure Database, Codename “Dallas” and OData” section of my Windows Azure and Cloud Computing Posts for 10/15/2010+ post (scroll down):

The Open Conference Protocol project[, which uses existing hCalendar, hCard and Open Graph standards] appears to compete with Falafel Software’s OData-based EventBoard Windows Phone 7 app described in my Windows Azure and SQL Azure Synergism Emphasized at Windows Phone 7 Developer Launch in Mt. View post of 10/14/2010. I’m sure EventBoard took longer than 30 man-hours to develop; according to John Watters, it took two nights just to port the data source to Windows Azure. OData implementations have the advantage that you can query the data. Hopefully, the PDC10 team will provide an OData feed of session info eventually.

PDC10’s OData feed must have been available to some earlier than today, because Craig Dunn announced his Conf for PDC10 on Windows Phone 7 app on 10/26/2010:

A first-cut of the PDC10 schedule can now be downloaded for the Windows Phone 7 version of Conf - now available on Marketplace (search for Conf/look for this tile).

To download new conference data in Conf

  • Start on the first panel of the Panorama
  • Scroll down to other conferences... and touch Download more...
  • When the list downloads from the server, touch PDC10
  • PDC10 should appear in the list - if not, switch between the conferences until it does :-s

This is what the app looks like with PDC10 data loaded:

The iPhone version of Conf is currently awaiting AppStore approval - fingers crossed for Thursday!

It appears to me that OData is gaining traction in the event reporting arena.


Wayne Walter Berry posted Gaining Performance Insight into SQL Azure to the TechNet Wiki on 10/25/2010:

Understanding query performance in SQL Azure can be accomplished by utilizing SQL Server Management Studio or the SET STATISTICS Transact-SQL commands. Since SQL Server Profiler isn’t currently supported with SQL Azure, this article will discuss some alternatives that provide database administrators insight into exactly what Transact-SQL statements are being submitted to the server, and how the server accesses the database to return result sets.

SQL Server Management Studio

Utilizing SQL Server Management Studio you can view the Actual Execution Plan for a query. This gives insight into the indexes that SQL Azure is using to query the data, the number of rows returned at each step, and which steps are taking the longest.

Here is how to get started:

  1. Open SQL Server Management Studio 2008 R2; this version easily connects to SQL Azure.
  2. Open a New Query Window.
  3. Copy/Paste Your Query into the New Query Window.
  4. Click on the toolbar button to enable the Actual Execution Plan, or choose Include Actual Execution Plan from the menu bar.
  5. Once you have included your plan, run the query. This will give you an additional Execution plan tab in the results pane.


Reading an execution plan is the same in SQL Server 2008 R2 as it is in SQL Azure, and how to read them is beyond the scope of this blog post; to find out more about execution plans, read Reading the Graphical Execution Plan Output. One of the things I use execution plans for is to develop covered indexes to improve the performance of a query. For more information about covered indexes, read this blog post.

USING “SET STATISTICS”

SET STATISTICS is a Transact-SQL command you can run in the query window of SQL Server Management Studio to get back statistics about your query's execution. There are a couple of variants on this command, one of which is SET STATISTICS TIME ON. The TIME option returns the parse, compile and execution times for your query.

Here is an example of the Transact-SQL that turns on the timing statistics:

SET STATISTICS TIME ON

SELECT *
FROM SalesLT.Customer
INNER JOIN SalesLT.SalesOrderHeader
    ON SalesOrderHeader.CustomerId = Customer.CustomerId

I executed the example on the Adventure Works database loaded into SQL Azure, and got these results:


SET STATISTICS will give you some “stop watch” metrics about your queries; as you optimize them, you can rerun them with SET STATISTICS TIME ON to determine if they are getting faster.

Another flavor of SET STATISTICS is SET STATISTICS IO ON; this variant gives you information about the I/O performance of the query in SQL Azure. My example query looks like this:

SET STATISTICS IO ON

SELECT *
FROM SalesLT.Customer
INNER JOIN SalesLT.SalesOrderHeader
    ON SalesOrderHeader.CustomerId = Customer.CustomerId

And the output looks like this:

We covered I/O performance in SQL Azure in an earlier blog post, so I will not go into detail again here.

Observing running queries

With SQL Server you can utilize SQL Profiler to show all the queries running in real-time. In SQL Azure, you can still get access to the running queries and their execution count, via the Procedure cache, with a Transact-SQL query similar to this:

SELECT q.text, s.execution_count
FROM sys.dm_exec_query_stats AS s
CROSS APPLY sys.dm_exec_sql_text(s.plan_handle) AS q
ORDER BY s.execution_count DESC

For more information about how the procedure cache works in SQL Azure, see this blog post.

Glad to see Wayne is posting again!


Beth Massi wrote Add Some Spark to Your OData: Creating and Consuming Data Services with Visual Studio and Excel 2010 for the Sep/Oct 2010 issue of Code Magazine:

The Open Data Protocol (OData) is an open REST-ful protocol for exposing and consuming data on the web. Microsoft's implementation, formerly known as Astoria and then ADO.NET Data Services, is now officially called WCF Data Services in the .NET Framework. There are also SDKs available for other platforms like JavaScript and PHP. Visit the OData site at www.odata.org.

With the release of .NET Framework 3.5 Service Pack 1, .NET developers could easily create and expose data models on the web via REST using this protocol. The simplicity of the service, along with the ease of developing it, makes it very attractive for CRUD-style data-based applications to use as a service layer to their data. Now with .NET Framework 4 there are new enhancements to data services, and as the technology matures more and more data providers are popping up all over the web. Codename “Dallas” is an Azure cloud-based service that allows you to subscribe to OData feeds from a variety of sources like NASA, Associated Press and the UN. You can consume these feeds directly in your own applications or you can use PowerPivot, an Excel Add-In, to analyze the data easily. Install it at www.powerpivot.com.

For .NET developers working with data every day, the OData protocol and WCF Data Services in the .NET Framework can open doors to the data silos that exist not only in the enterprise but across the web. Exposing your data as a service in an open, easy, secure way provides information workers access to Line-of-Business data, helping them make quick and accurate business decisions. As developers, we can provide users with better client applications by integrating data that was never available to us before or was clumsy or hard to access across networks.

In this article I’ll show you how to create a WCF data service with Visual Studio 2010, consume its OData feed in Excel using PowerPivot, and analyze the data using a new Excel 2010 feature called sparklines. I’ll also show you how you can write your own Excel add-in to consume and analyze OData sources from your Line-of-Business systems like SQL Server and SharePoint.

Creating a Data Service Using Visual Studio 2010

Let’s quickly create a data service using Visual Studio 2010 that exposes the AdventureWorksDW data warehouse. You can download the AdventureWorks family of databases here: http://sqlserversamples.codeplex.com/. Create a new Project in Visual Studio 2010 and select the Web node. Then choose ASP.NET Empty Web Application as shown in Figure 1. If you don’t see it, make sure your target is set to .NET Framework 4. This is a handy new project template to use in VS 2010, especially if you’re creating data services.


Figure 1: Use the new Empty Web Application project template in Visual Studio 2010 to set up a web host for your WCF data service.

Click OK and the project is created. It will only contain a web.config. Next add your data model. I’m going to use the Entity Framework so go to Project -> Add New Item, select the Data node and then choose ADO.NET Entity Data Model. Click Add and then you can create your data model. In this case I generated it from the AdventureWorksDW database and accepted the defaults in the Entity Model Wizard. In Visual Studio 2010 the Entity Model Wizard by default will include the foreign key columns in the model. You’ll want to expose these so that you can set up relationships easier in Excel.

Next, add the WCF Data Service (formerly known as ADO.NET Data Service in Visual Studio 2008) as shown in Figure 2. Project -> Add New Item, select the Web node and then scroll down and choose WCF Data Service. This item template is renamed for both .NET 3.5 and 4 Framework targets so keep that in mind when trying to find it.


Figure 2: Select the WCF Data Service template in Visual Studio 2010 to quickly generate your OData service.

Now you can set up your entity access. For this example I’ll allow read access to all my entities in the model:

Public Class AdventureWorksService
    Inherits DataService(Of AdventureWorksDWEntities)

    ' This method is called only once to initialize service-wide policies.
    Public Shared Sub InitializeService(ByVal config As DataServiceConfiguration)
        ' TODO: set rules to indicate which entity sets and service
        ' operations are visible, updatable, etc.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead)
        config.DataServiceBehavior.MaxProtocolVersion =
            DataServiceProtocolVersion.V2
    End Sub
End Class

You could add read/write access to implement different security on the data in the model or even add additional service operations depending on your scenario, but this is basically all there is to it on the development side of the data service. Depending on your environment this can be a great way to expose data to users because it is accessible anywhere on the web (i.e., your intranet) and doesn’t require separate database security setup. This is because users aren’t connecting directly to the database, they are connecting via the service. Using a data service also allows you to choose only the data you want to expose via your model and/or write additional operations, query filters, and business rules. For more detailed information on implementing WCF Data Services, please see the MSDN library.
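
If you want to sanity-check the service before pointing PowerPivot at it, the same feed can also be read with the WCF Data Services client library. The localhost URI, the DimCustomer entity set, and the generated proxy type in this sketch are assumptions based on the AdventureWorksDW model, not code from the article:

' Imports System.Data.Services.Client (add a service reference to generate the proxy types)
Dim ctx As New DataServiceContext(New Uri("http://localhost:1234/AdventureWorksService.svc/"))
Dim firstTen = ctx.Execute(Of DimCustomer)(New Uri("DimCustomer?$top=10", UriKind.Relative))
For Each customer In firstTen
    Console.WriteLine(customer.LastName) ' LastName is a column in AdventureWorksDW's DimCustomer table
Next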

You could deploy this to a web server or the cloud to host for real or you can keep it here and test consuming it locally for now. Let’s see how you can point PowerPivot to this service and analyze the data a bit.

Read more: Article Pages 2, 3 - Next Page: 'Using PowerPivot to Analyze OData Feeds' >>


The San Francisco SQL Server Users Group will feature a Consuming OData Services for Business Applications presentation by Beth Massi on 11/10/2010 6:30 PM at the Microsoft San Francisco office, 835 Market Street Suite 700, San Francisco, CA 94103:

Speaker: Beth Massi of Microsoft

The Open Data Protocol (OData) is a REST-ful protocol for exposing and consuming data on the web and is becoming the new standard for data-based services.

In this session you will learn how to easily create these services using WCF Data Services in Visual Studio 2010 and will gain a firm understanding of how they work as well as what new features are available in .NET 4 Framework.

You’ll also see how to consume these services and connect them to other public data sources in the cloud to create powerful BI data analysis in Excel 2010 using the PowerPivot add-in.

Finally, we'll build our own Office add-ins that consume OData services exposed by SharePoint 2010.

Speaker: Beth Massi is a Senior Program Manager on the Visual Studio BizApps team at Microsoft and a community champion for business application developers. She has over 15 years of industry experience building business applications and is a frequent speaker at various software development events. You can find Beth on a variety of developer sites including MSDN Developer Centers, Channel 9, and her blog www.BethMassi.com

Follow her on Twitter @BethMassi

Sounds to me as if Beth’s presentation will be based on her Code Magazine article (above).


1989Poster published .NET Data Access Essential [Training] with links to download the entire eight parts on 10/24/2010:

The Microsoft .NET Framework is a robust development platform with an enriched ecosystem of tools, components and features enabling developers to enhance their skill sets and create compelling solutions. Learn about the flexibility that this Framework provides for accessing data in your applications. ADO.NET is a set of computer software components that programmers can use to access data and data services. It is a part of the base class library that is included with the Microsoft .NET Framework.

1. Introduction to LINQ
2. A Closer Look at LINQ to SQL
3. Intro to WCF Data Services & OData
4. Getting Started with ADO.NET Entity Framework
5. Deeper Look at ADO.NET Entity Framework
6. Azure Data Storage Options


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Clint Warriner of the Microsoft CRM Team announced Windows Azure Web Role Hosted Service + Microsoft Dynamics CRM Online Impersonation on 10/21/2010 (missed when posted):

We have had a lot of demand for some direction on how to impersonate a Microsoft Dynamics CRM Online user from a Windows Azure hosted service (Web Role). I have completed a walkthrough that will guide you through this process using the default Cloud Web Role project in VS 2008/2010.


This walkthrough uses Windows Identity Foundation instead of RPS due to some limitations I had with RPS. You can find the documentation and sample code on http://code.msdn.microsoft.com/crmonlineforazure.


Clemens Vasters (@clemensv) published Windows Azure AppFabric Datacenter IP ranges on 10/25/2010:

We know that there are a number of you out there who have outbound firewall rules in place on your corporate infrastructures that are based on IP address whitelisting. So if you want to make Service Bus or Access Control work, you need to know where our services reside.

Below is the current list of where the services are deployed as of today, but be aware that it’s in the nature of cloud infrastructures that things can and will move over time. An IP address whitelisting strategy isn’t really the right thing to do when the other side is a massively multi-tenant infrastructure such as Windows Azure (or any other public cloud platform, for that matter).

  • Asia (SouthEast): 207.46.48.0/20, 111.221.16.0/21, 111.221.80.0/20
  • Asia (East): 111.221.64.0/22, 65.52.160.0/19
  • Europe (West): 94.245.97.0/24, 65.52.128.0/19
  • Europe (North): 213.199.128.0/20, 213.199.160.0/20, 213.199.184.0/21, 94.245.112.0/20, 94.245.88.0/21, 94.245.104.0/21, 65.52.64.0/20, 65.52.224.0/19
  • US (North/Central): 207.46.192.0/20, 65.52.0.0/19, 65.52.48.0/20, 65.52.192.0/19, 209.240.220.0/23
  • US (South/Central): 65.55.80.0/20, 65.54.48.0/21, 65.55.64.0/20, 70.37.48.0/20, 70.37.64.0/18, 65.52.32.0/21, 70.37.160.0/21


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team announced Windows Azure Platform Benefits for MSDN Subscribers “have been extended to 16 months!” in a 10/25/2010 update:

The Windows Azure Platform is an internet-scale cloud computing and services platform hosted in Microsoft data centers. The platform provides a range of functionality to build applications from consumer web to enterprise scenarios. MSDN subscribers can take advantage of this platform to build and deploy their applications.

Introductory 16-month Offer Available Today


Estimated savings: $2,518 (USD)

What to Know Before You Sign Up

The first phase of this introductory offer will last for 8 months from the time you sign up; then you will renew once for another 8 months. After that, you'll cancel this introductory account and sign up for the ongoing MSDN benefit (see below) based on your subscription level. Here's how to manage your account settings after you've signed up:

  1. Sign in to the Microsoft Online Services Customer Portal.
  2. Click on the Subscriptions tab and find the subscription called “Windows Azure Platform MSDN Premium”.
  3. Under the Actions section, make sure one of the options is “Opt out of auto renew”.  This ensures your benefits will extend automatically.  If you see “Opt in to auto renew” instead, select it and click Go to ensure your benefits continue for another 8 months.
  4. After your first 8 months of benefits have elapsed (you can check your start date by hovering over the “More…” link under “Windows Azure Platform MSDN Premium” on this same page), you will need to come back to this page and choose “Opt out of auto renew” so that your account will close at the end of the 16-month introductory benefit period.  If you keep this account active after 16 months, all usage will be charged at the normal “consumption” rates.

You'll need your credit card to sign up. If you use more than the amount of services included with your MSDN subscription, we'll bill your card for these overages. You can visit the Microsoft Online Services Customer Portal to look up your usage at any time.

* Available for signup today in the countries listed.  Not available to Empower for ISV members; only available to the 3 Technical Contacts for Certified Partners and Gold Certified Partners.

Future Subscriber Benefits (starting November 2010 or later)


** Not available to subscribers currently participating in the introductory offer. Not available to Empower for ISV members; only available to the 3 Technical Contacts for Certified Partners and Gold Certified Partners.


Rob Conery prefaced his Introducing the New Authorize.NET SDK post of 10/25/2010 with “I've never had an easy time with payment gateways. Their APIs tend to be ridiculously verbose and written by engineers who like ... writing verbose APIs. When I was approached to design and build the new Authorize.NET SDK - I jumped at the chance. I do hope you'll like it.”:

The Money Shot

Rather than drag you through preamble, here's all you need to do to charge your customers.


The sample is a console application, and it shows how Authorize.NET's Advanced Integration Method (or AIM) works. I managed to get the meat of the transaction complete in three lines of code - pretty slick if you ask me!

Download the SDK here.

I worked with the crew at LabZero and it was an all-out "platform blitz": PHP, Ruby and Rails, Java, and .NET. We had some fun and friendly competition to see who could roll the cleanest SDK - and I think I topped the Ruby one pretty handily :). Anyway - back to the code.

Design Principle One: Testability

Payment system SDKs and sample code usually don't take testing into account and I wanted to change that. I don't like how invasive gateway code can be - so I kept it as light and nimble as possible.

There are three core interfaces that you work against - only one of which you need to know about (the gateway itself). The first is IGateway.


This interface logs you into Authorize.NET and sends off an IGatewayRequest, returning an IResponse. There are 4 IGatewayRequests:

  • AuthorizationRequest: This is the main request that you'll use. It authorizes and optionally captures a credit card transaction.
  • CaptureRequest: This request will run a capture (transacting the money) on a previously authorized request.
  • CreditRequest: Refunds money.
  • VoidRequest: Voids an authorization.

The essential flow is:

  • Create the Gateway
  • Create your Request
  • Send() your request through the Gateway, and get an IResponse back
  • The IResponse will tell you all about your transaction, including any errors …
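
Putting that flow together, a charge looks roughly like the sketch below. The constructor arguments, test-mode flag, and response properties are inferred from the type names in the post, so treat them as assumptions and check the SDK download for the exact signatures:

' Minimal sketch: authorize-and-capture with the Authorize.NET AIM gateway (names and signatures assumed).
Dim gateway = New Gateway("API_LOGIN_ID", "TRANSACTION_KEY", True) ' True = test mode (assumed flag)
Dim request = New AuthorizationRequest("4111111111111111", "0224", 12.95D, "Sample order")
Dim response As IResponse = gateway.Send(request)

If response.Approved Then
    Console.WriteLine("Authorized: " & response.AuthorizationCode) ' assumed response properties
Else
    Console.WriteLine("Declined: " & response.Message)
End If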

Rob continues with “Design Principle Two: Smaller Method Calls,” “Design Principle Three: Love the Dev With Helpers,” “ApiFields: An On-The-Fly Reference API,” “ASP.NET MVC Reference Application,” and:

Obligatory Hedge

This isn't a "grand release" with a ton of hoopla. It's the first drop, and I will be refining things and working out bugs as we move on. I'm also building on some more functionality - so drops in the future are coming. If you do find an issue - you want to be sure to report it to the support forums (not me - LabZero is tracking all of this stuff rather tightly).

Hope you enjoy.


Sacha Dawes reported Windows Azure Monitoring Management Pack Release Candidate (RC) Now Available For Download in a 10/25/2010 post to TechNet’s Nexus SC: The System Center Team blog:

We are pleased to announce the availability of the Release Candidate (RC) of the Monitoring Management Pack (MP) for Windows Azure (version 6.1.7686.0), which can be immediately deployed by customers running System Center Operations Manager 2007 and Operations Manager 2007 R2.

As more customers adopt Windows Azure to develop and deploy applications, we know that integrating the management of those applications into their existing console is an important requirement. This management pack enables Operations Manager customers to monitor the availability and performance of applications that are running on Windows Azure.

Customers should head to the MP page on Microsoft Pinpoint to find more information on the MP, as well as download the bits (due to replication times, some customers may have to wait up to 24 hours from this notice before being able to download the bits). Customers who download the MP will be supported by Microsoft in their production deployments.

To answer some immediate questions on the management pack and its capabilities:

Q. What does the Windows Azure MP do?
A. The Windows Azure MP includes the following capabilities:

  • Discovery of Windows Azure applications.
  • Status of each role instance.
  • Collection and monitoring of performance information.
  • Collection and monitoring of Windows events.
  • Collection and monitoring of the .NET Framework trace messages from each role instance.
  • Grooming of performance, event, and the .NET Framework trace data from Windows Azure storage account.
  • Change the number of role instances.

Q. Why is the Azure Management Pack a Release Candidate (RC) and not RTW?
A. We have released this as an RC as we work to implement new monitoring scenarios in the cloud.

Q. When will the RTW version of the Azure Management Pack be available?
A. We expect to make it available in H1 CY2011.

Q. Is this MP supported in production deployments?
A. Yes the Release Candidate of the Azure Management Pack will be supported for customers using it in production deployments.


Roberto Bonini continued his Windows Azure Feedreader Episode 7: User Subscriptions series on 10/25/2010:

Episode 7 is up. I somehow found time to do it over several days.

There are some audio issues, but they are minor. There is some humming noise for most of the show; I've yet to figure out where it came from. Apologies for that.

This week we

  • clear up our HTML Views
  • implement user subscriptions

We aren’t finished with user subscriptions by any means. We need to modify our OPML handling code to take account of the logged in user.

Enjoy:


Remember, you can head over to vimeo.com to see the show in all its HD glory.

Next week we’ll finish up the user subscriptions code in the OPML handling code.

And we’ll start our update code as well.


Chirag Mehta (@chirag_mehta) analyzes The Future Of BI In The Cloud in this 10/25/2010 essay:

Actual numbers vary based on whom you ask, but the general consensus is that Business Intelligence (BI) and analytics in the cloud is a fast-growing market. IDC expects a compound annual growth rate (CAGR) of 22.4% through 2013. This growth is primarily driven by two kinds of SaaS applications. The first kind is a purpose-specific analytics-driven application for business processes such as financial planning, cost optimization, inventory analysis etc. The second kind is a self-service horizontal analytics application/tool that allows customers and ISVs to analyze data and create, embed, and share analysis and visualizations.

The category that is still nascent and will require significant work is traditional general-purpose BI on large data warehouses (DW) in the cloud. For most enterprises, not only are all the DWs on-premise, but the majority of the business systems that feed data into these DWs are on-premise as well. If these enterprises were to adopt BI in the cloud, it would mean moving all the data, warehouses, and the associated processes such as ETL into the cloud. But then, the biggest opportunities to innovate in the cloud exist outside of it. I see significant potential to build black-box, appliance-style systems that sit on-premise and encapsulate the on-premise complexity - ETL, lifecycle management, and integration - involved in moving the data to the cloud.

Assuming that the enterprises succeed in moving data to the cloud, I see a couple of challenges, if treated as opportunities, will spur the most BI innovation in the cloud.

Traditional OLAP data warehouses don’t translate well into the cloud:

The majority of on-premise data warehouses run on some flavor of a relational or a columnar database. Most BI tools use SQL to access data from these DWs. These databases are not inherently designed to run natively on the cloud. On top of that, the optimizations performed on these DWs such as sharding, indices, compression etc. don't translate well into the cloud either, since the cloud is a horizontally elastic scale-out platform and not a vertically integrated, scale-up system.

Organizations are rethinking their persistence options as well as their access languages and algorithms while moving their data to the cloud. Recently, Netflix started moving their systems into the cloud. It's not a BI system, but it has similar characteristics such as a high volume of read-only data, a few index-based look-ups etc. The new system uses S3 and SimpleDB instead of Oracle (on-premise). During this transition, Netflix picked availability over consistency. Eventual consistency is certainly an option that BI vendors should consider in the cloud. I have also started seeing DWs in the cloud that use HDFS, Dynamo, and Cassandra. Not all relational and columnar DW systems will translate well into NoSQL, but I cannot overemphasize the importance of re-evaluating persistence store and access options when you decide to move your data into the cloud.

Hive, a DW infrastructure built on top of Hadoop, is a MapReduce meet SQL approach. Facebook has a 15 petabytes of data in their DW running Hive to support their BI needs. There are a very few companies that would require such a scale, but the best thing about this approach is that you can grow linearly, technologically as well as economically.

I/O virtualization does not make the cloud a good platform for I/O-intensive applications such as BI:

One of the major issues with large data warehouses is, well, the data itself. Any kind of complex query typically involves intensive I/O computation. But I/O virtualization in the cloud simply does not work for large data sets. Remote I/O, due to its latency, is not a viable option. Block I/O is a popular approach for I/O-intensive applications. Amazon EC2 does have block I/O for each instance, but it obviously can't hold all the data, and it's still a disk-based approach.

For BI in the cloud to be successful, what we really need is the ability to scale out block I/O, just like scale-out computing. The good news is that there is at least one company that I know of, SolidFire, working on it. I met Dave, the founder, at the Structure conference reception. He explained to me what he is up to. SolidFire has a software solution that uses solid state drives (SSDs) as scale-out block I/O. I see huge potential in how this can be used for BI applications.

When you put all the pieces together, it makes sense. The data is distributed across the cloud on a number of SSDs that are available to the processors as block I/O. You run some flavor of NoSQL to store and access this data that leverages modern algorithms and, more importantly, the horizontally elastic cloud platform. What you get is commodity, blazingly fast BI at a fraction of the cost with a pay-as-you-go subscription model.

Now, that’s what I call the future of BI in the cloud.

Chirag says:

In my current role I am helping SAP explore, identify, and execute the growth opportunities by leveraging the next generation technology platform - in-memory computing, cloud computing, and modern application frameworks - to build simple, elegant, and effective applications to meet the latent needs of the customers.


Marketwire published on 10/25/2010 a Zuora Enables ISV to Rapidly Deploy, Meter, Price, and Bill for Windows Azure Platform Cloud Solution press release that also describes the Zuora Toolkit for Windows Azure:

Company News

  • Zuora, the leader in subscription billing, today announced that sharpcloud has successfully implemented the Zuora Z-Commerce for the Cloud platform in 20 days. sharpcloud provides visual, social roadmapping which is used by customers to manage knowledge and networks around their strategic planning and innovation programs. sharpcloud is developed and hosted on Windows Azure.

  • sharpcloud leveraged Windows Azure's platform and tools to bring their solution to life, and quickly set about the task of monetizing, namely, enabling sharpcloud for subscriptions. It quickly became clear that building such a complicated solution in-house was not an option.

  • Through the Microsoft Windows Azure and BizSpark One programs, sharpcloud was able to get started quickly with the Zuora team, which was already well versed and integrated into the Windows Azure ecosystem.

  • sharpcloud is now using Zuora to enable billing, payments and subscription management for its Windows Azure-based application, specifically:

    • Multiple pricing plans to sell to individuals and businesses

    • Support for multiple currencies -- GBP, USD, and Euros

    • Online commerce capabilities

    • Support for credit cards; PCI compliance

    • Seamless integration to PayPal for payment processing

    • A platform to handle subscription orders from different partners, including Fujitsu

  • sharpcloud deployed the Zuora Toolkit for Windows Azure to dramatically accelerate time-to-market and automate the pricing and web orders for its Windows Azure platform cloud-based application with seamless integration to PayPal.

Commentary

  • "Microsoft and BizSpark One have given us the tools, platform and program to research, develop and release to web our social business application," said Sarim Khan, CEO and Co-Founder, sharpcloud. "Zuora has filled the missing link by enabling billing, payments and subscription management for our application, giving us the flexibility to price and package and in the future expand to new markets."

  • "Cloud computing has dramatically improved the way that ISVs can develop, deliver and support innovative business solutions," said Michael Maggs, senior director, Windows Azure partner strategy at Microsoft. "Zuora enables Windows Azure developers to rapidly monetize those applications with the right billing infrastructure so they can spend more effort on building successful cloud businesses."

  • "Billing is the lifeblood of ISVs in the cloud, and Zuora is the leader in helping companies like sharpcloud rapidly monetize their solutions on Windows Azure," said Shawn Price, president at Zuora. "Zuora is committed to working closely with Microsoft to deliver on the cloud commerce capabilities that the more than 10,000 Windows Azure developers and customers require."

Enabling the Cloud Computing Business Model on Windows Azure

  • Based on Zuora's industry-leading on-demand subscription billing and commerce platform, Z-Commerce for the Cloud enables the new business models introduced by cloud computing, such as usage and pay-as-you-go pricing, where existing infrastructures were built for one-time and perpetual pricing models. Prior to Zuora, cloud providers and ISVs could not deliver on the need to meter, price, and bill for cloud services -- the heart of the new cloud business model.

  • At this week's Microsoft VC Summit, Zuora CEO Tien Tzuo will be presenting a session that illustrates this shift entitled "Azure: Microsoft Cloud Computing Strategy - How to Successfully Build, Deploy and Monetize the Windows Azure Platform" that will describe how cloud computing is creating significant opportunities for ISVs to build and monetize a range of new applications with Zuora and Microsoft technologies.

Zuora Toolkit for Windows Azure

  • To drive success and adoption for the Windows Azure ecosystem, Zuora has delivered the Zuora Toolkit for Windows Azure in conjunction with Microsoft to enable developers and ISVs to easily automate commerce from within their Windows Azure application and/or website in a matter of minutes. With the Zuora Toolkit for Windows Azure, developers and ISVs can:
    • create flexible price plans and packages;
    • support usage and pay-as-you-go pricing models; 
    • initiate a subscription order online;
    • accept credit cards with PCI Level 1 compliance; and 
    • manage the customers, recurring subscriptions, and invoicing.


Elizabeth White asserted “Global Alliance With Microsoft Enables Innovative Delivery of Banking Services” in a deck to her Misys Collaborates with Microsoft to Extend Financial Apps to the Cloud post of 10/25/2010:

Misys plc, the global application software and services company, announces a new strategic alliance with Microsoft Corp. This new initiative builds on last year's mission-critical applications development alliance and will deliver Misys' banking and capital markets applications via the Windows Azure cloud platform. The technical collaboration with Microsoft, announced at Sibos 2010 in Amsterdam, will provide financial institutions with the choice and flexibility they need to maximise the return on their IT investment and deliver innovative services to their customers more rapidly.

Financial institutions typically depend on a multitude of applications and systems that are integrated with customers, partners and external financial networks. Running these applications requires complex data centre and support structures that are expensive to operate. Cloud computing, and specifically Windows Azure, enables banks to move from a capital intensive cost model to one which is based on the consumption of technology. No longer will banks need to over-order computing resources because the scale of the Azure platform allows high volume workloads such as end-of-day processing to be consumed on demand.

Misys and Microsoft have successfully deployed instances of the Misys BankFusion Universal Banking solution to the Windows Azure platform. The Misys solution is built on state-of-the-art BankFusion technology, which adheres to a rigorous set of standards but is unconstrained by proprietary infrastructure, which makes it possible to run the solution in the cloud. Both companies have received significant interest from banks looking to reduce complexity and operational risks by running their banking systems in the cloud.

"The combination of BankFusion, the most advanced financial services platform on the market today, and the innovative Windows Azure cloud computing infrastructure is world-beating," said Al-Noor Ramji, EVP and General Manager, Misys. "New banking solutions must reduce operational costs. By making our solutions available in the cloud, we are enabling our clients to benefit from increased agility with lower TCO and risk, while simultaneously providing them with unprecedented speed and flexibility with access to the latest solutions. The initiative lets banks concentrate once again on the business of banking."

"This is a very exciting time for the financial services industry," said Karen Cone, general manager, Worldwide Financial Services, Microsoft. "Our enterprise cloud computing expertise, coupled with the industry-leading solutions from Misys, brings a unique value proposition to the sector. Through strategic engineering alliances with industry leaders such as Misys, we are focused on delivering both the on-premise and cloud-based solutions that our customers need to gain the benefits of cloud services on their own terms. They are able to leverage and extend their existing IT investments to take advantage of cloud computing, resulting in a reduction of cost and the ability to enhance operations through cloud-based improvements and build transformative applications that create new business opportunities."

"Globally, large banks face increasing demands for innovative, scalable services across business units to meet both competitive forces and marketplace expectations. At the same time, smaller banks are beginning to understand the value of investing in operations, processes, and technologies that make them more flexible and nimble," said Rodney Nelsestuen, Senior Research Director, TowerGroup. "Whether large or small, today's financial institutions must seek to improve operations while not losing sight of the need to manage costs closely. "To that end, TowerGroup has witnessed a growing interest in new variable cost models and on demand service models such as those emerging in cloud computing, or newer forms of managed and shared services, and outsourcing across a variety of technologies and services. These emerging approaches offer large banks the opportunity to leverage scale while smaller banks can compete effectively through shorter time to market and lower upfront investment."

The collaboration between Microsoft and Misys demonstrates that the financial services industry is now moving to the next generation of banking platforms. Many financial institutions already run finished services in the cloud such as Microsoft Exchange, Microsoft Office and Microsoft Dynamics CRM Online solutions. This news extends the Microsoft cloud capability to banking applications.


Tim Anderson (@timanderson) published An honest assessment of Windows Phone 7 on 10/25/2010:

imageI’ve been using Windows Phone 7 for a week and a half now, in the shape of an HTC Mozart on Orange. So what do I think?

I am not going to go blow-by-blow through the features – others have done that, and while it is important to do, it does not convey well what the phone is like to use. Instead, this is my first impression of the phone together with some thoughts on its future.


First, it is a decent smartphone. Take no notice of comments about the ugliness of the user interface. Although it looks a little boxy in pictures, in practice it is fun to use.

Some things take a bit of learning. For example, there is a camera button on the phone, and a full press on this activates the camera from almost anywhere. Within the camera, a full press takes a picture, but a half press or a press and hold activates autofocus. I did not find this behaviour immediately intuitive, but it is something you get used to.

There is plenty to like about the phone. This includes the dynamically updating tiles; the picture hub and the ability to auto-upload pictures to Skydrive, Microsoft’s free cloud storage; and neat touches such as the music controls which appear over the lock screen when you activate the screen during playback; or the Find your Phone feature which can ring your phone loudly even if it is set to silent, or lock the phone and add an if found message.

The People hub is fabulous if you use Facebook. I don’t use Facebook much, but even with my limited use, I noticed that as soon as I linked with Facebook, the phone felt deeply personalised to me, with little pictures of people I know in the People tile. The ability to link two profiles to one contact is good.

I also like the Office hub, which includes SharePoint Workspace Mobile - useful for synching content. Microsoft should push this hard, especially as Office 365, which includes hosted Exchange and SharePoint, gains users.

There are some excellent design touches. For example, many apps have a menu bar with icons at the foot of the screen. There are no captions, which saves space, but by tapping a three-dot icon you can temporarily display captions. In time you learn them and no longer need to.

The pros and cons of hubs

Microsoft has addressed what is a significant issue in other smartphones: how to declutter the user interface. Windows Phone 7 hubs collect several related apps and features (between which there is no sharp difference) into a multi-page view. There are really six hubs:

  • People
  • Pictures (includes the camera)
  • Music and videos
  • Marketplace
  • Office
  • Games

I like the hubs in general; but there are a few issues. Of the hubs listed above, four of them work well: People, Pictures, Music/Videos, and Games. Marketplace is not really a hub any more than “phone” is a hub – it is just a way to access a single feature. Office is handy but it is not a hub gathering all the apps that address a particular area; it is a Microsoft brand. If I made a word processor app I could not add it to the Office hub.

Further, operators and OEMs can add their own hubs, but will most likely make bad decisions. There is a pointless HTC hub on my device which combines weather and featured apps. It also features a dizzying start-up animation which soon gets tired. I have no idea what the HTC hub is meant to do, other than to promote the HTC brand.

Speaking of brands, I have deliberately left the home screen on my Mozart as supplied by Orange. As you can see from the picture above, Orange decided we would rather see four Orange apps occupy 50% of the home screen (before you scroll down), than other features such as web browsing, music and video, pictures and so on. Why isn’t Orange a hub so that at least all this stuff is in one place?

The user can modify the home screen easily enough, and largely remove the Orange branding. But to get back to my point about hubs: it is not clear to me what a hub is meant to be. It is not really a category, because you cannot create hubs or add and remove apps from them, and because of the special privileges given to OEMs we get nonsense like the HTC hub, alongside works of art like the Pictures hub.

There is still more good than bad in the hub concept, but it needs work.

Not enough features?

I have no complaint about lack of features in this first release of Windows Phone 7. Yes, I would like tethering. Yes, I would like the ability to copy an URL from the web browser to the Twitter client. But I am happy with the argument that Microsoft was more concerned with getting the foundation right, than with supplying every possible feature in version one.

I am less happy with the notion that Microsoft can afford for the initial devices to be a bit hopeless, and fix it up in later versions. I am not sure how much time the company has, before the world at large just presumes it cannot match iPhone or Android and forgets Microsoft as a smartphone company.

Is it a bit hopeless, or very good at what it does? I am still not sure, mainly because I seem to have had more odd behaviour than some other early adopters. Example: licence error after downloading from marketplace; apps that don’t open or which give an error and inform me that they have to close; black screens. A few times I’ve had to restart; once I had to remove the battery – thank you HTC Notes, which has been updated and now does not work at all. It is possible that there is some issue with my review device, such as faulty RAM, or maybe the amount of memory in a Mozart is inadequate. I am going to assume the former, but await other reports with interest.

The one area where Windows Phone 7 is weak is in app availability. I would like a WordPress app, for example. Clearly this will fix itself if the device is popular, though there are some issues facing third-party developers which will impede this somewhat.

App Development and the Marketplace

The development platform for third parties is meant to be Silverlight and XNA, two frameworks based on .NET which address general apps and games respectively. These are strong platforms, backed up by Visual Studio and the C# programming language, so not a bad development story as far as it goes.

That said, there are a couple of significant issues here. One is that third-party apps do not have access to all the features of the phone and cannot multi-task. Switch away from an app and it dies. This can result in a terrible user experience. For example, I fire up the impressive game The Harvest. Good though it is, it takes a while to load. Finally it loads and play resumes from where I got to last time. I’m just wondering what to tap, when the lock screen kicks in – since I have not tapped anything for a bit (because the game was loading), the device has decided to lock. I flick back the lock. Unfortunately the game has been killed, and starts over with resume and a long loading process.

The other area of uncertainty relates to native code development. C/C++ and native code is popular for mobile apps. It is efficient, which is good for devices with constrained resources; and while native code is by definition not cross-platform, large chunks of the code for one platform will likely port OK to another.

Third party developers cannot do native code development for Windows Phone 7. Or can they? Frankly, I have heard conflicting reports on this from Microsoft, from developers, and even from other journalists.

At the beginning, when the Windows Phone 7 development platform was announced at the Mix conference last year, it was stated that the only third parties allowed to use native code were Adobe (because Microsoft wants Flash on the device), operators, and OEM hardware vendors. At the UK reviewer's workshop, I was assured by a Microsoft spokesperson that this is still the case, and that no other third parties have been given special privileges.

I am sceptical though. I expect important third parties like Spotify will use native code for their apps, and/or get access to additional APIs. If you have a good enough relationship with Microsoft, or an important enough app, it will be negotiable.

In fact, I hope this is the case; and I also expect that there will be an official, public native code SDK for the device within a year or two.

As it is, the situation is unsatisfactory. I dislike the idea that only operators and OEMs can use native code – especially as this group does not have the best track record for creating innovative and useful apps. I have more confidence in third party developers to come up with compelling apps than operators or hardware vendors – who all too often just want to plug their brand.

I also think the Marketplace needs work. If I search Marketplace, I want it filtered to apps only by default, but for some reason the search covers music and video as well, so if I search for a Twitter client, I get results including a song called Hit me up on Twitter. That's nonsense.

I wonder if the submission process is a little too lax at the moment, because Microsoft is so anxious to fill Marketplace with apps. I suppose there will always be too many lousy apps in there, on this and other platforms. Still, while nobody likes arbitrary rejections, I suspect Microsoft would win support if it were more rigorous about enforcing standards in areas like how well apps resume after they are killed by the operating system, and in their handling of the back button, two areas which seem lacking at the moment.

Complaints and annoyances

One persistent annoyance with the HTC Mozart is the proximity of the menu bar, which appears at the bottom of many apps, to the "hardware" buttons for back, start, and search that are compulsory on all Windows Phone 7 devices. The problem is that on the Mozart, these buttons are the same as app buttons, triggered by a light touch. So I accidentally hit back, start or search instead of one of the menu buttons. I have similar issues with the onscreen keyboard. I'm learning to be very very careful where I tap in that region, which makes using the device less enjoyable.

Another annoyance is the unpredictability of the back button. I am often unsure whether this is going to navigate me back within an app, or kick me out of the app.

Some of the apps are poor or not quite done. This will sort itself presuming the phone is not a complete flop. For example, in Twozaic, when typing a tweet, the post button is almost entirely hidden by the keyboard. I would like an Android style close keyboard button (update: though the back button should do this consistently).

I have already mentioned problems with bugs and crashes, which I am hoping are specific to my device.

It seems to me that Microsoft has taken a look at Apple's extraordinarily profitable approach to devices and thought "We want some of that." The device is just as locked down as an iPhone – except that in Apple's case there are no OEMs to disrupt the user experience with half-baked apps, and operators are also prevented from interfering. With Windows Phone we kind-of have the worst of both worlds: operators and OEMs can spoil the phone's usability – though this is constrained in that clued-up users can get rid of what they do not want – but we are still restricted from doing things like attaching the phone as USB storage.

Still not completely fixed – the OEM problem

My final reflection (for now) is that Windows Phone 7 still reflects Microsoft's OEM problem. This device matters more to Microsoft than it does either to the operators or the OEM hardware vendors – who have plenty else to be getting on with, including other mobile operating systems. In consequence, the launch devices do not do justice to the capabilities of Windows Phone 7, and in some cases let it down badly. I do not much like the HTC Mozart, and suspect that HTC just has not given the phone the attention that it needed.

One solution would be for Microsoft to make its own device. Another would be for some hardware vendor to come up with a superb device that would make us re-evaluate the platform. Those with long memories will recall that HTC did this for Windows CE, with the original iPAQ, the first devices using that operating system which performed satisfactorily.

HTC could do it again, but has not delivered with the Mozart, or I suspect with its other launch devices.

I have also noted issues with the way Orange has customised my device, which is another part of the same overall issue.

Despite Microsoft’s moves to mitigate its OEM problem, by enforcing consistency of hardware and by (mostly) retaining control over the user interface, it is still an area of concern.

Related posts:

  1. Want a Windows Phone 7? Here are the choices and costs in the UK
  2. Windows Phone 7 incompatibility may drive developers elsewhere
  3. Windows Phone 7 battles indifference in London

Clearly, the success of WP7 will have a major influence on new Windows Azure and SQL Azure deployments as mobile clients become increasingly important consumers of cloud-based apps.


Avkash Chauhan explained How to change the VM size for your Windows Azure Service in a 10/25/2010 post:

image As you may know, Azure VM sizes are set in four different categories:

  1. Small  - 250GB
  2. Medium - 500GB
  3. Large - 1TB
  4. ExtraLarge - 2TB

imageIt is possible that after your service is running you might need to tune it, either to add more space to the Azure VM or to reduce the VM size to save cost if the resources exceed what your service needs. This concept is known as "elastic scale," and Windows Azure supports it well.

Here are a few ways you can accomplish it:

1. Auto-scale using the Azure Service Management API:

There is a great blog post on how to accomplish this: http://blogs.msdn.com/b/gonzalorc/archive/2010/02/07/auto-scaling-in-azure.aspx

2. In-place upgrade for your Azure service:

  1. Upload a new package with a higher (or lower) VM size to the Staging slot first (see the sketch below)
  2. Perform a VIP swap so your service uses the deployment with the new VM size
  3. Delete the deployment left in the Staging slot; otherwise you will pay for both slots
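The VM size itself is declared in the service definition rather than set in the portal, so the new package in step 1 simply carries an updated ServiceDefinition.csdef. A minimal sketch (the service and role names are illustrative):

<ServiceDefinition name="MyAzureService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1" vmsize="Medium">
    <!-- change vmsize to Small, Medium, Large or ExtraLarge, rebuild the package, then deploy to Staging and VIP-swap -->
    <!-- endpoints, configuration settings, etc. -->
  </WebRole>
</ServiceDefinition>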


Eric Knorr (pictured below) asserted “Even as Microsoft rolls out its Office 365 cloud offering, the company stubbornly doubles down on the desktop” as a deck for his What Office 365 says about Microsoft post of 10/25/2010 to InfoWorld’s Modernizing IT blog:

image On the day Microsoft announced Office 365, Kurt DelBene, president of the Microsoft Office Division, said: "This resets the bar for what people will expect of productivity applications in the cloud."

Oh, Microsoft. Why must you say these things?

image [ Also on InfoWorld: Read Woody Leonard's excellent analysis of why Ray Ozzie left Microsoft. | Then have a look at Neil McAllister's comparative review of "Office suites in the cloud: Microsoft Office Web Apps versus Google Docs and Zoho." ]

image We all know that Office 365 is basically an upgrade and repackaging of BPOS (Business Productivity Online Suite), which consists of Microsoft-hosted versions of SharePoint, Exchange, and Office Communications. The obvious difference is that 365 adds -- drumroll, please -- Office Web Apps.

In the heat of last Tuesday's announcement, some people jumped to the conclusion that the addition of Web Apps meant, at long last, that Microsoft had an answer to Google Apps. The truth: Not any more than it already did. You can already use Office Web Apps and SkyDrive for free on the Office Live site; BPOS is available separately for $10 per user per month. Office 365 wraps the two together -- so how exactly is the bar being "reset"?

Cloudy with a chance of misinformation

To be fair, the private beta program for Office 365 has just started, and Office 365 will probably be considerably better than BPOS, with online 2010 versions of SharePoint, Exchange, and Office Communications -- the last renamed Lync Server and, according to InfoWorld contributor J. Peter Bruzzese, vastly improved.

But guess what else is part of Office 365? Office Professional Plus, a desktop product. So not only does Office 365 fail to "reset the bar" for productivity applications in the cloud, its main productivity applications aren't in the cloud at all. As before, Office Web Apps are intended to be browser-based extensions to the desktop version of Office.

Office 365 will actually come in two flavors: an enterprise version that includes Office Professional Plus and a small-business version that does not. Wait, does that mean you can use the small-business version of Office 365 without a locally installed version of Office? Nope. Check the system requirements for the small-business version and you will find the following note: "Office 2007 SP2 or Office 2010."

Read more: 2, next page ›

SharePoint Server 2010 in the cloud is needed to make Microsoft Access 2010 Web Databases accessible at reasonable cost to a few hundred or so users.


Jeffrey Schwartz posted Cloud Credo to Microsoft Partners on 10/22/2010 to his Schwartz Report blog for the Redmond Channel Partner newsletter:

Why should Microsoft's partners who sell software to customers feel inclined to sacrifice those revenues and margins in favor of the company's cloud services?

image "If you don't do it, you will be irrelevant in the next four or five years," said Vahé Torossian, corporate vice president of Microsoft's Worldwide Small and Midmarket Solutions and Partners (SMS&P) Group. Jon Roskill, corporate vice president of the Microsoft Worldwide Partner Group, reports to Torossian, and Torossian reports to Chief Operating Officer Kevin Turner.

image Torossian spoke to a group of about 60 partners yesterday at this month's local International Association of Microsoft Channel Partners (IAMCP) meeting held at Microsoft's New York office. "Believe me, I'm a very polite guy, I don't want to be blunt for the sake of being blunt, I am saying that because it starts with us."

imageMicrosoft's competitors, the likes of Google and Salesforce.com, are making this push without the legacy of a software business to protect. Torossian told partners that he believes 30 percent of all of Microsoft's customers will transition their IT operations to the cloud with or without Microsoft.

"I think it's important that you start to lead the transition and assuming that your customers have already been meeting with your competitor," he said. "We are giving you the opportunity to have the discussion with your customers [where you can say] if you're really interested in the cloud, you'll be able to get there. We want to position ourselves as a leader in the cloud."

It should come as no surprise to those who have been following Microsoft's "we're all in the cloud" proclamations these days. But the fact that he responded so bluntly underscores how unabashed Microsoft is in trying to get its message out to partners.

"I think he was just being blatantly honest, I don't think there was any softer way to say it, it's the truth," said Mark Mayer, VP of sales and marketing at Aspen Technology Solutions, a Hopatcong, N.J.-based solution provider. "His most important point, is his competition doesn't' have a legacy business. He's not competing apples to apples."

Howard Cohen, president of the New York IAMCP chapter, said he initially thought Torossian was joking, but quickly realized he was serious. "I think it was refreshing," Cohen said. "He wasn't saying 'you should do this because we want you to,' he was saying 'we're doing this because we have to. The market demands it and you should come along with us.' I think that's an accurate message. I too believe if you ignore cloud services you won't be doing much."

The challenge is for solution providers and partners to deal with the change in a way that allows them to transition and maintain a profitable business, said Neil Rosenberg, president and CEO of Quality Technology Solutions Inc., a Parsippany, N.J.-based partner.

A $50,000 Exchange deployment might translate to a $15,000 consulting engagement to deploy Exchange Online. "Making up the volume is challenging," Rosenberg said.

So what's he doing about it? He's starting to focus more on the application stack, getting involved in more SharePoint and business intelligence development and consulting, for example.

"On the flip side, I don't think infrastructure is going to go away, it's going to gradually reduce in terms of what customers are looking to have in house, and there's going to be a whole separate set of services around management and planning of the infrastructure," Rosenberg said. "Stuff still needs to be managed in the cloud, just in a different way."

As Office 365 applications migrate to the Windows Azure Platform, they will become an integral part of many Windows Azure and SQL Azure projects.


<Return to section navigation list> 

Visual Studio LightSwitch

Bob Baker posted Microsoft Visual Studio LightSwitch ONETUG Presentations and Sample posted on 10/23/2010:

image22242I had a great time at both the September and October meetings presenting A Lap around Microsoft Visual Studio LightSwitch (currently in Beta), a new and exciting Rapid Application Development environment for business analysts and Silverlight developers.

Microsoft Visual Studio LightSwitch gives you a simpler and faster way to create professional-quality business applications for the desktop, the web, and the cloud. LightSwitch is a new addition to the Visual Studio family. Visit this page often to learn more about this exciting product.

I have posted a zip file containing the two PowerPoint presentations, a database backup, and the LightSwitch solution shown at the meetings, along with some brief instructions, on my SkyDrive (which you can get from the link below). I hope you find these resources useful. As always, feel free to contact me if you have any further questions.

Download the sample code from SkyDrive here.


<Return to section navigation list> 

Windows Azure Infrastructure

Microsoft, IBM and Oracle share Gartner’s Magic Quadrant for Application Infrastructure for Systematic SOA-Style Application Projects according to this 10/21/2010 report:

Figure 1. Magic Quadrant for Application Infrastructure for Systematic SOA-Style Application Projects

Source: Gartner (October 2010)

Market Overview

image In recent years, Gartner has identified a trend in enterprise IT projects away from the best-of-breed middleware selection and toward selecting a sole, or at least a primary, provider of enabling technology for the planned project type. Thus, we have noted the emergence of a new type of market, defined by the requirements of a particular type of IT project, rather than by the taxonomy of vendor offerings (the traditional type of technology markets).

While continuing to analyze markets for specialized products — for example, enterprise application servers, horizontal portals, business process management suites and business intelligence tools — Gartner is also providing analysis of the overall application infrastructure market through the lens of some prevailing use patterns (see "Application Infrastructure Magic Quadrants Reflect Evolving IT Demands"). Buyers in such markets are not looking to invest in a grand, all-encompassing application infrastructure technology stack, but rather are looking for a vendor that understands and supports the kind of project requirements they face.

A systematic SOA-style business application project is one such type of project that is a frequent initiative of mainstream enterprise IT, and will continue to be through the next five years. With this project type, the effort centers on the modeling and design of an SOA-style application topology, and the development of service implementations and user-facing logic (which is often multichannel). The orchestration of new and pre-existing like and unlike services is a key requirement (including some degree of SOA-style integration and governance).

Users and vendors that meet in this market are driven to support systematic software development and deployment projects designed to deliver new and/or composite SOA business applications. The "new" in the project characterization indicates that most software and the data model of the application are newly designed for this project. The "composite" refers to use of pre-existing external services. The "service oriented" in the characterization means that the software architecture will consist of clients, service interfaces and service implementations.

Gartner also offers a separate analysis of the strategic SOA infrastructure projects, where the market is focused on establishing the operational and governance environments for coexistence and interoperation of multiple SOA-style applications. This market does not include in consideration the requirements of building new SOA-style services or clients, and it does not target any one application project, but rather targets the establishment of a long-term infrastructure platform for the current and future SOA-style software resources — internal, purchased, remote (B2B) and cloud-sourced. If your project looks to build a systematic SOA application and, in the process, establish the governance and operational platform for the future SOA-style application projects and acquisitions, we recommend that you examine the "Magic Quadrant for Shared SOA Interoperability Infrastructure Projects" together with the vendor assessments presented in this research.

Gartner also offers analysis of systematic application integration projects. This type of project focuses on the integration of pre-existing software that is resident in a variety of different systems, custom-designed, purchased, contracted as a cloud service or offered by partner enterprises. There is no focus on the ability to construct new applications. There is also no priority for SOA-style integration at the expense of other integration practices. If your project, while building a new SOA application, must substantially interact with non-SOA external resources, then we recommend that you examine the application integration Magic Quadrant, along with this research, to fine-tune your decision process.

Recently, a new category of application infrastructure has emerged. As cloud computing moves toward the mainstream, application infrastructure technology emerges that is designed specifically for the requirements of that use pattern. In this Magic Quadrant, we examine several cloud technology providers to reflect this trend and to acknowledge that, as IT organizations evaluate enabling technology for their projects, platform-as-a-service (PaaS) options compete with traditional on-premises alternatives.

This Magic Quadrant is intended for IT projects that are looking for a single vendor to support all or most of the project requirements end to end. Projects that prefer custom best-of-breed selection of component technologies for their new SOA-style applications should examine multiple Gartner technology-centered Magic Quadrants, including "Magic Quadrant for Enterprise Application Servers," "Magic Quadrant for Horizontal Portals," "Magic Quadrant for Business Process Management Suites" and "Magic Quadrant for Integration Service Providers." …

Read the entire report by Yefim V. Natis, Massimo Pezzini, Jess Thompson, Kimihiko Iijima, Daniel Sholler, Eric Knipp, Ray Valdes, Benoit J. Lheureux, Paolo Malinverno, and Mark Driver here.


Mary Jo Foley analyzed the effect of Microsoft's outgoing Chief Software Architect on the 'post-PC world' in her 10/25/2010 post to ZDNet’s All about Microsoft blog:

Ray Ozzie may be a lame duck at this point, as he will soon be leaving his Chief Software Architect post at Microsoft. But that hasn’t stopped him from publishing an updated assessment of Microsoft’s strategy and products.

imageOn October 25, Ozzie posted to his newly minted blog a memo he sent to his staff and direct reports, entitled “Dawn of a New Day.” In it, Ozzie examines what Microsoft has and hasn’t achieved since he joined the company five years ago and penned his “Internet Services Disruption” memo. (Thanks to Student Partner Pradeep Viswav for the pointer to the latest Ozzie memo.)

imageThe “Dawn of a New Day” memo makes it clear — at least to me — that Ozzie has concerns about Windows. He doesn’t state this as bluntly as I just did. (And maybe the talk I’ve heard about an Ozzie vs. Windows Chief Steven Sinofsky feud is coloring my opinion here.) But you wouldn’t catch any other member of Microsoft’s top brass wondering aloud about the rightful reigning place of PCs in the future. Microsoft’s official public stance is Windows PCs are now and will stay at the center of the computing universe, no matter what kinds of new devices become popular.

image In his new memo, Ozzie described the “post-PC world” he sees coming — a world of continuous services and connected devices. He noted that early adopters have “decidedly begun to move away from mentally associating our computing activities with the hardware/software artifacts of our past such as PC’s, CD-installed programs, desktops, folders & files.”

The PC client and PC-based server models have become immensely complex because of a number of factors, Ozzie argued, including how broad and diverse the PC ecosystem has become and how complex it has become to “manage the acquisition & lifecycle of our hardware, software, and data artifacts,” Ozzie said.

I doubt the Windows management would state things this way, but there is some evidence they realize this as well. Microsoft has been trying to detangle the ever-growing body of Windows code via projects like MinWin, and is making noises about simplifying the acquisition of software and services via a Windows app store in Windows 8.

But will those efforts be enough and happen quickly enough? More from Ozzie’s latest memo:

“It’s undeniable that some form of this (PC) complexity is readily apparent to most all our customers: your neighbors; any small business owner; the ‘tech’ head of household; enterprise IT.

“Success begets product requirements. And even when superhuman engineering and design talent is applied, there are limits to how much you can apply beautiful veneers before inherent complexity is destined to bleed through.

“Complexity kills. Complexity sucks the life out of users, developers and IT. Complexity makes products difficult to plan, build, test and use. Complexity introduces security challenges. Complexity causes administrator frustration.”

He notes that there's a flip side of complexity: It also provides some guarantee of longevity because of the interdependencies it creates. You can't just flip a switch and get rid of something that is so deeply embedded in your work and home life.

Ozzie isn’t predicting the PC is going away overnight. “The PC and its ecosystem is going to keep growing, and growing, for a long time to come,” he opined. But if and when the post-PC world arrives, users and vendors need to be ready for it, he said.

Connected devices in Ozzie’s view, are not the PCs of today. While some ultimately may look like today’s desktop PCs or laptops, they’ll be more like embedded devices, optimized for varying purposes, he said.

These next-gen devices, according to Ozzie, will “increasingly come in a breathtaking number of shapes and sizes, tuned for a broad variety of communications, creation & consumption tasks. Each individual will interact with a fairly good number of these connected devices on a daily basis – their phone / internet companion; their car; a shared public display in the conference room, living room, or hallway wall.”

“Indeed some of these connected devices may even grow to bear a resemblance to today’s desktop PC or clamshell laptop,” Ozzie continued. “But there’s one key difference in tomorrow’s devices: they’re relatively simple and fundamentally appliance-like by design, from birth. They’re instantly usable, interchangeable, and trivially replaceable without loss. But being appliance-like doesn’t mean that they’re not also quite capable in terms of storage; rather, it just means that storage has shifted to being more cloud-centric than device-centric. A world of content – both personal and published – is streamed, cached or synchronized with a world of cloud-based continuous services.”

Ozzie’s latest missive made it clearer, in my view, why he is leaving Microsoft. While there are some — many, perhaps — at the company who see things the way Ozzie does, I am doubtful that CEO Steve Ballmer and favored son Sinofsky do. Yes, Microsoft is pouring lots of marketing and development dollars into mobile and R&D, but decisions like prohibiting OEMs from preloading the more-touch-centric Windows Phone operating system on slates and tablets says to me that protecting the Windows PC fiefdom is Rule No. 1 in Redmond.

Secondly, if you look back at Ozzie's original Internet Services Disruption memo, some key changes he advocated haven't occurred at all. Five years ago, Ozzie said that Microsoft needed to increase the tempo of delivery for both the base OS experiences and the additive experiences and services that it delivered via its platforms division. Windows Vista was released to manufacturing in 2006; Windows 7 in 2009. It looks like Windows 8 is on a track to hit in 2012. (However, it's looking like the Internet Explorer team may finally decouple its delivery schedule from Windows'; rumor has it the final IE 9 could be out before mid-2011.) Each Windows Live "wave" is as encumbered as Windows itself with planning, processes and procedures, making delivery anything but agile.

Ballmer recently told attendees at a Gartner conference that he considered the company’s riskiest bet to be the next version of Windows. Yes, as a number of you readers have said, every version of Windows is risky because Windows is still Microsoft’s biggest cash cow. There are more than a billion Windows PCs on the planet. Every new version is a “risk” to some degree.

But I can't help but wonder if the complexity in the OS itself, the PC ecosystem at large (as outlined by Ozzie) and in the competitive landscape also makes Windows 8 especially risky. Will Windows 8 really be an evolutionary release that will keep Windows PCs relevant in the post-PC new world? If so, in what way(s)?


Rob Tiffany asserted Ray Ozzie sees the Dawn of a New Day for Microsoft in this 10/25/2010 post:


Five years after Ray Ozzie penned The Internet Services Disruption, he reflects on Microsoft’s move to the cloud.  While he’s most proud of Windows Azure and SQL Azure, he also gives our competitors their due by mentioning that they have out-executed us when it comes to mobile experiences.  He harps on the subject of how complexity kills and then challenges us to close our eyes and form a realistic picture of what a post-PC world might actually look like.

imageRay goes on to state that those who can envision a plausible future that's brighter than today will earn the opportunity to lead. His ultimate dream is to move us toward a world of:

  • Cloud-based continuous services that connect us all and do our bidding.  These are websites and cloud-based agents that we can rely on for more and more of what we do.  On the back end, they possess attributes enabled by our newfound world of cloud computing: They’re always-available and are capable of unbounded scale.
  • Appliance-like connected devices enabling us to interact with those cloud-based services.  This goes beyond the PC and will increasingly come in a breathtaking number of shapes and sizes, tuned for a broad variety of communications, creation & consumption tasks.  Each individual will interact with a fairly good number of these connected devices on a daily basis – their phone / internet companion; their car; a shared public display in the conference room, living room, or hallway wall.

As a Mobility Architect at Microsoft, I’m excited that my commitments align with this vision in connecting the Peanut Butter of the Cloud with the Chocolate of devices.  Wireless data networks, bandwidth, latency and signal coverage are the wildcards when it comes to making this vision a reality.  That’s why you’ll always see my concern for this Wireless wildcard reveal itself in all the Cloud-connected mobile architectures I design.

Check out the rest of Ray’s new memo at http://ozzie.net/docs/dawn-of-a-new-day/.

Rob is a Mobility Architect at Microsoft focused on designing and delivering the best possible Mobile solutions for his global customers.  His expertise lies in combining wireless data technologies, device hardware, Windows software, and optimized server infrastructures together to form compelling solutions.


T-10 Media issued a questionable Windows Azure Report Card – Year One on 10/24/2010 to its Azure Support blog:

imageOK, Windows Azure isn't a year old yet (it only came out of Beta in February) but it is almost a year since it was made widely available and demoed at PDC 2009. So before PDC 2010, it's a good time to reflect on the past year of Azure with a report card for Azure to take home to its Microsoft parent.

Overall Stability, Security and Performance :  A-

image Azure definitely confounded some of its harsher critics by registering a very good track record for uptime, performance and, most important, security. Since its launch there have been no major security issues and no large outages. I've run Azure since April, and the only issue was two short periods (under one hour) of sluggish performance while OS patches were being applied; it seems that most users' experiences have been similar to mine.

SQL Azure : D

While the performance of SQL Azure has been good, I can only register my bitter disappointment at the progress of adding features. I noted that probably the biggest weakness of SQL Azure at launch was the lack of any backup facility; we were promised two backup functions (continuous and clone), with one to appear in the first half of 2010, but there is still no sign of anything, and no sign of encryption or compression either [*]. The features that have been added can only be described as basic – such as a 50GB maximum database size, or the ability to upgrade to a larger database size (although we still have to execute a T-SQL ALTER statement for this).
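[The size upgrade mentioned above is a single T-SQL statement; a sketch, with an illustrative database name:

ALTER DATABASE MyAppDb MODIFY (EDITION = 'business', MAXSIZE = 50 GB)

Web edition databases top out at 5 GB, so growing beyond that also means switching the EDITION to 'business' as shown.]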

SQL Azure itself is a solid product but it is still too expensive and lacking in even the basic features SQL Server users require.

Windows Azure Features : C-

Azure is still not heavy in terms of features, which is fine for a product is its first year so it would be merited a B- or C+ had it not been for the omission of .NET 4.0 support (6 months and counting since .NET was released) [**].  Unfortunate for Microsoft the last year has been one of heavy innovation for its main rival – AWS. Most notably, AWS Simple Notification Services (SNS) allows AWS users to send notifications via several formats (even email and SMS) to alert a user of an application of an event (this is a heavily requested feature on Azure but no plans have been confirmed to add it).

Azure Tooling : C+

.NET developers are used to best-in-class dev tools, so we'd expect great tooling for deploying to Azure. We now have a vastly improved Visual Studio experience that allows direct deployment to Azure (which is great, since the Azure developer portal is still slow and generally a poor user experience). We can also connect directly to SQL Azure databases to view and interact with database objects from SSMS 2008 R2, and generate scripts to create a SQL Azure database. But this is pretty much the minimum we would expect from a Microsoft environment.

In the minus column, the Azure portal is still slow and lacking in features, monitoring of cost is very basic, and there are no tools for monitoring the various running instances of an Azure service. Also, migration tools are pretty lacking – the SQL Azure Migration Wizard is good for troubleshooting a migration to SQL Azure but won't be able to solve many of the incompatibilities. No tool exists for migrating an ASP.NET app (there are several tutorials on migration in which the process looks relatively simple, but there are enough gotchas to make the migration of a reasonably sized ASP.NET app a real headache).

Pricing : D

I've given a D for pricing, as it is the one area where improvement could easily have been made. Probably the most persistent complaint about Azure is the high cost of entry. With a single SQL Azure database and only a single compute instance, an Azure plan will cost $60 – $90 per month, depending on any discounting given if you are a member of BizSpark, MSDN, etc. Even worse, if you need to rely on Azure's 99.95% uptime SLA you are required to have two compute instances, which will comfortably bring the cost to over $100 (imagine your hosting company informing you that you needed a second server if you wanted good uptime). This is a relatively high barrier to entry for small developers who are building new apps that will initially use only a fraction of the small compute instance and the 1GB SQL Azure database allocation. AWS, meanwhile, now offers an ultra-small instance that costs only $15 per month, and in addition it is offering a full free year to new users.

I would have given an E except for the fact that Azure pricing matches AWS whilst offering a lot more [***]. The Azure platform handles all the OS patching, scaling and security without the need for user intervention. AWS, by contrast, with its infrastructure-as-a-service model merely provides the OS and offers some tools the user can implement to manage scaling (such as the Elastic Load Balancer); patching and security updates are left to the user.

Conclusion

Overall the grades might not have been too high, but the grade for stability, security and performance is definitely the most important, since poor performance would surely mean the death of the Azure platform; the one area where Azure should definitely have done better is pricing. We are definitely lacking in tools and features (especially for SQL Azure), but since it is just a year old we shouldn't be too demanding – the real test of Azure will be whether it can innovate over the next one to two years.

* Azure Support published SQL Azure Backup Using Database Copy on 8/28/2010 and noted therein that Azure “data is replicated across 3 geographic locations.” This statement is not correct. The original data and two replicas run on different fault domains in the same data center.

Wayne Walter Berry described Backing Up Your SQL Azure Database Using Database Copy in an 8/25/2010 post to the SQL Azure blog. The post contains the following statement:

The backup is performed in the SQL Azure datacenter using a transactional mechanism without downtime to the source database. The database is copied in full to a new database in the same datacenter. You can choose to copy to a different server (in the same data center) or the same server with a different database name.
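The copy is started and monitored with plain T-SQL from the destination server's master database; a sketch with illustrative database names:

-- start the asynchronous, transactionally consistent copy
CREATE DATABASE MyAppDb_Backup AS COPY OF MyAppDb

-- check on the copy's progress
SELECT * FROM sys.dm_database_copies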

I don’t consider $9.95/month for a 1-GB cloud-based RDBMS with built-in high availability to be a high price.

** The Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010) released to the Web on 5/19/2010 supports .NET Framework 3.5 and 4.0.

*** At the time of writing Windows Azure offered a free small instance, table and blob data storage, queue messaging, and three 1-GB SQL Azure databases for a limited duration to MSDN subscribers.

I question the relevance of grades awarded by a site with basic misconceptions of current Windows Azure and SQL Azure features.


Lori MacVittie (@lmacvittie) asserted You may have heard the term “full-proxy architecture” or “dual stacks” thrown around in the context of infrastructure; here’s why that distinction is important as an introduction to her explanation of  Why Single-Stack Infrastructure Sucks of 10/25/2010:

image When the terms “acceleration” and “optimization” in relation to application delivery are used it often evokes images of compression, caching, and similar technologies. Sometimes it even brings up a discussion on protocol optimization, which is really where things get interesting. 

rock-stackYou see, caching and compression techniques are mostly about the content – the data – being transferred. Whether it’s making it smaller (and thus faster) or delivering it from somewhere closer to the user (which also makes it faster) the focus of these acceleration techniques is really the content. But many of the most beneficial optimizations happen below the application data layer, at the transport level and below.

It’s all about the stack, baby. A good one is fast, a bad one, well, isn’t.

But it isn’t just about an optimized stack. Face it, there are like a gazillion different tricks, tips, and cheats for optimizing the stack on every operating system but all of them are peculiar to a specific operating environment. Which is great if you’re an end-user trying to trick out your desktop to download that ginormous file even faster. When it’s not so great is when you’re a web server or a piece of application delivery infrastructure.

ONE SIZE DOES not FIT ALL

So here's the thing – when you tweak out a single-stack piece of infrastructure for a specific environment you're necessarily ignoring every other environment. You have to pick and choose what set of optimizations you're going to use, and you're stuck with it. If eighty percent of your user-base is accessing an application over "link A" then the other twenty percent are probably going to experience poor performance – and you'll be lucky if they don't experience time-outs or resets as well.

imageThis problem (which has been solved by full-proxy, dynamic dual-stack infrastructure for a long time) has reared its ugly head yet again recently with the excitement over virtual network appliances (VNA). You know, a virtual image of your infrastructure components, deployed in the same flexible, rapid manner as your applications. The problem with this is that just slapping a network component into a virtual image results in a less than optimal integration. The component leverages the networking stack of the hypervisor necessarily, which means it is optimized to communicate over a LAN. A low latency, high-throughput, high capacity network connection without a lot of congestion. You know, the kinds of things that make WAN-delivered applications slow, sluggish, and unresponsive.

For the same reasons that a web/application server – regardless of form-factor – can’t be optimized for both LAN and WAN at the same time neither can a VNA. It has a single-stack because that’s what’s underlying the entire system and what’s being interfaced with. It cannot simultaneously address pain points with WAN connected communications and LAN connected communications.

So not only are you incapable with a single-stack infrastructure of optimizing and accelerating on a per-connection basis, when you deploy an infrastructure component in virtualized form (or any other form that results in a single network stack architecture)  you are now incapable of optimizing and accelerating on a per network connection basis. It’s LAN or WAN, baby. Those are your choices.

TRANSLATORS and TRAFFIC COPS

An intermediary is defined as a “mediator: a negotiator who acts as a link between parties”. The analogy of a “translator” is often used to describe the technical functionality of an intermediary, and it’s a good one as long as one remembers that a translator actually does some work – they translate one language to another. They terminate the conversation with one person and initiate and manage conversations with another simultaneously. They are “dual” stacked, if you will, and necessarily must be in order to perform the process of translation.

This is in stark contrast to previous analogies where load balancers and other application delivery focused infrastructure were analogized as "traffic cops." Traffic cops, when directing traffic, do not interact or otherwise interrupt the flow of traffic very often. They are not endpoints, they are not involved in the conversation except to point out where and when cars are allowed to go. They do not interact with the traffic in the way that a translator does. In fact they use nearly universal hand signals to direct traffic (think transport protocol layer and below) because they are primarily concerned with speed and performance. Their job is to get that car (packet) moving in the right direction and get it out of the way. They don't care where it's going or what it's going to do there; traffic cops only care about making sure the car (packet) is on its way.

Translators, intermediaries, care about what is being said and they are adept at ensuring that the conversation is managed properly. Speed and performance are important, but making sure the conversation is accurate and translated correctly is as important to the translator as doing so quickly.

Traffic cops are single-stacks; translators are dual-stacks.

DIALECTS and DIFFERENCES

imageWhen you have exactly the same connection type on both sides of the conversation, a traffic cop is okay. But this is almost never the case, because even when two clients access an application over the generic “WAN”, there are still variances in speed, latency, and client capabilities. Sure, they’re both speaking Chinese,  but they’re both speaking different dialects of Chinese that each have their own nuances and idioms and especial pronunciation that requires just a bit different handling by the infrastructure. Optimizing and accelerating those connections requires careful attention to each individual conversation, and may further require tweaks and tuning on-demand for that specific conversation over and above the generic WAN-focused tweaks and tuning performed to enhance WAN communication.

A dual-stack infrastructure component is an intermediary. It can perform the function of a traffic-cop if that’s all you need but it is almost certainly the case that you need more, because users and partners and integrated applications are accessing your applications from a variety of client-types and a broad set of network connections. Dual-stack infrastructure separates, completely, the client communication from the server-communication, and enables the application and enforcement of policies that enhance security, performance, and availability by adapting in real-time to the conditions that exist peculiar to the client and the application.

Single-stack infrastructure simply cannot adapt to the volatile environment of today's modern deployment architectures, e.g. cloud computing, highly virtualized, multi-site, and highly distributed. Single-stack infrastructure – whether network or server – is unable to properly optimize that single network stack in a way that can simultaneously serve up applications over WAN and LAN, and do so for both mobile and desktop clients such that both are happy with the performance.

Consider the queues on your web server – that's where data collects on a per-connection basis, waiting to be transferred to the client. There are only so many queues that can be in use at any given time – it's part of the capacity equation. The ability of clients to "pull" that data out of queues is directly related to the speed and capacity of their network connection and to the configuration and resources of their client. If they pull it too slowly, that queue is tied up and resources assigned to it can't be used by other waiting users. Slow-moving queues necessarily decrease the concurrent user and connection capacity of a server (virtual or iron), and the result necessitates more hardware and servers as a means to increase capacity. A single-stack infrastructure really can't address this common problem well. A dual-stack infrastructure can, by leveraging its buffering capacity to quickly empty those queues and re-use the resources for other connections and users. In the meantime, it's doling out the data to the client as quickly or slowly as the client can consume it, with negligible impact on the infrastructure's resource availability.

Dual-stack infrastructure can be tweaked and tuned and adapts at execution time. It’s agile in its ability to integrate and collaborate with the rest of the infrastructure as well as its ability to apply the right policies at the right time based on conditions that are present right now as opposed to when it was first deployed. It can be a strategic point of control because it intercepts, inspects, and can act based on context to secure and accelerate applications. 
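To make Lori's dual-stack point concrete, here is a minimal, illustrative C# sketch of an intermediary that terminates the client connection and opens a separate server-side connection, so each side gets its own buffers and runs at its own pace. The host name, port and buffer sizes are assumptions, and a real application delivery controller does far more (TLS termination, HTTP parsing, policy), but the two independent stacks are the point:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class MiniFullProxy
{
    static void Main()
    {
        // Client-facing stack: the proxy terminates inbound TCP connections here
        var listener = new TcpListener(IPAddress.Any, 8080);
        listener.Start();
        while (true)
        {
            TcpClient client = listener.AcceptTcpClient();
            new Thread(() => Handle(client)) { IsBackground = true }.Start();
        }
    }

    static void Handle(TcpClient client)
    {
        // Server-facing stack: a separate connection the proxy owns and tunes independently
        using (client)
        using (var server = new TcpClient("app.example.com", 80)) // hypothetical origin server
        {
            NetworkStream clientSide = client.GetStream();
            NetworkStream serverSide = server.GetStream();

            // Independent buffers per side: drain the server quickly,
            // feed the client only as fast as it can consume.
            var upstream = new Thread(() => Pump(clientSide, serverSide, 8 * 1024));
            var downstream = new Thread(() => Pump(serverSide, clientSide, 64 * 1024));
            upstream.Start();
            downstream.Start();
            upstream.Join();
            downstream.Join();
        }
    }

    static void Pump(NetworkStream source, NetworkStream destination, int bufferSize)
    {
        var buffer = new byte[bufferSize];
        try
        {
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                destination.Write(buffer, 0, read);
            }
        }
        catch (Exception)
        {
            // One side closed or reset; let both relays tear down.
        }
    }
}

Because each connection is terminated separately, the proxy can empty the server's send queue at LAN speed while metering data out to a slow WAN client from its own buffer.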


Michael J. Miller asked Gartner: Will Microsoft and VMware Dominate the Cloud Landscape? in this 10/21/2010 post to PC Magazine’s Forward Thining … blog:

One of the more interesting sessions at Gartner Symposium yesterday was on "Cloud Computing: Changing the Vendor Landscape." Gartner Analysts David Cearley and David Mitchell Smith predicted that by 2013, only two vendors will be perceived as leaders in both cloud computing and enterprise computing. Those two vendors are likely to be Microsoft and VMware, they said.

Smith noted that the companies seen today as enterprise computing leaders, such as SAP and Oracle, aren't seen as cloud computing leaders; and cloud leaders, such as Amazon, Salesforce, and Google, aren't seen as enterprise leaders. Over time, they say this will change.

In their view, the cloud computing continuum moves from closed private cloud implementations to full open public ones, with lots of things in between, which include managed private clouds, virtual private clouds, and community private clouds (shared by a few companies).

Slicing Cloud Horizontally.png

They said that cloud services exist in a value chain, and that it will result in more interesting connections between the various vendors. But they were also clear that putting IBM Websphere or an Oracle database on top of an infrastructure platform such as Amazon EC2 is not creating a cloud-based service; it's just another way of hosting applications.

They then went through the major cloud players and talked about their pros and cons. They discussed which ones were providers of cloud services (where companies can buy services) and which were enablers (which create technology, but others create the services); which layers of cloud services (infrastructure, platform, and applications) the companies offer products for; and whether the vendors support public or private clouds, or both. Here's a summary.

Cloud Vendor Emphasis.png

They went through each of the vendors, in most cases, pointing out the pros and cons of their offerings and where they fell short of a full solution. Overall, they say only Microsoft and VMware have full lines, although their offerings are very different from each other.

Smith said Microsoft's choices were "insanely complex" as it offered all sorts of products in all sorts of ways. It is an enabler of cloud services within companies, a provider of its own services, and also sells services through third parties. It has products for both public and private clouds, and it offers lots of SaaS applications (some hosted, some really cloud-based, and some moving in the cloud direction), and its Azure products, which offer a hybrid of infrastructure and platform as a service.

Microsoft has "one of the most visionary and complete views of the cloud," Cearley said. In some respects, he said, in a few years, you may think of their enterprise offerings as private versions of their cloud offerings. On the other side, he said, many of the specific offerings aren't fully mature yet. But Smith noted that software moves faster on the cloud.

VMware, on the other hand, is not trying to be a provider -- just an enabler, Smith said. It has been focused on private clouds, but with things like Springsource and Zimbra, it is taking on more public cloud attributes. Smith said most of the company's products are typically not offered "as a service." But he lauded the company's comprehensive strategy focus, and Cearley talked about breadth of enabling technologies and working with lots of providers who will deliver the services.

Overall, they said to look at a number of offerings from both established and up-and-coming vendors; to expect a lot of consolidation in the space; and to look at "cloud service brokerages" to help companies transition to the cloud.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

image

No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

Laura Smith wrote End-to-end monitoring offers a view from cloud to enterprise as a feature article for the 10/18/2010 issue of the SearchCIO.com blog:

image Imagine the "situation room": IT executives are staring at a wall of screens that depict the real-time performance of applications and transactions. Does this sound like science fiction? It isn't anymore, as a spate of end-to-end monitoring tools makes its way to market.

Announcements this month from such companies as Compuware Corp., Precise Software Solutions Inc. and Microsoft -- along with offerings from Correlsense Ltd., Veeam Software Corp. and Quest Software Inc.'s Vizioncore -- promise to make visible network perfor …

Reading the article’s remainder requires site registration.


<Return to section navigation list> 

Cloud Computing Events

The Windows Azure Team suggested What to Watch and How to Stay Connected at PDC10 in this 10/25/2010 post:

imageWith PDC10 right around the corner, we wanted to provide a "cheat sheet" with information to help you get the most out of the event, no matter how you plan to attend! If you're looking to learn the latest about the Windows Azure platform, below is a list of the live sessions you won't want to miss; all sessions will stream live here on the "Cloud Services" channel and will be available on-demand if you miss the live session (all times PDT). 

Thursday, October 28, 2010 

  • 9:00 - 11:00AM: Keynote with Bob Muglia and Steve Ballmer
  • 11:30AM - 12:30PM: Building Windows Phone 7 Applications with the Windows Azure Platform - Steve Marx
  • 11:30AM - 12:30PM: Building, Deploying and Managing Windows Azure Applications - Jim Nakashima
  • 2:00 - 3:00PM: Migrating and Building Apps for Windows Azure - Mohit Srivastava
  • 3:15 - 4:15PM: Composing Applications with AppFabric Services - Karandeep Anand
  • 4:30 - 5:30PM: New Scenarios and Apps with Data in the Cloud - Dave Campbell

Friday, October 29, 2010

  • 9:00 - 10:00AM: Connecting the Cloud & On-Premise Apps with the Windows Azure Platform - Yousef Khalidi
  • 9:00 - 10:00AM: Open in the Cloud:  Windows Azure and Java - Vijay Rajagopalan
  • 10:15 - 11:15AM: Identity & Access Control in the Cloud - Vittorio Bertocci
  • 10:15 - 11:15AM: Windows Azure Storage Deep Dive - Jai Haridas
  • 11:30AM- 12:30PM: Inside Windows Azure - Mark Russinovich
  • 2:00 - 3:00PM: Building Scale-Out Database Solutions on SQL Azure - Lev Novik
  • 3:15 - 4:15PM: Building High Performance Web Applications with the Windows Azure Platform - Matthew Kerner

imageIn addition to these live sessions, a variety of cloud-related pre-recorded sessions are also available, to find them, click on Sessions and then scroll down to "Pre-recorded" on the left menu.

Given that the majority of attendees will be participating virtually this year, we want to be sure you feel connected to the action and have the chance to join the conversation.  That's why we've asked several of our resident Windows Azure experts to tweet their impressions and insights into what's happening during the event.  We encourage you to follow them on Twitter to get the latest updates from them and to provide you a way to ask questions about what you're seeing and hearing.  You may already be following some of these folks but we encourage you to follow as many as you can (in addition to @WindowsAzure, of course):

Finally, if you plan to attend PDC10 in person, we invite you to join our Tweetup, which will take place during the attendee party on Thursday night, October 28, 2010 from 6 - 10PM at the Lucky Strike!  We'll have many of our Windows Azure evangelists and experts on-hand, along with many members of our virtual Windows Azure 'community' so please stop by and say hi!  We'll tweet our exact location at the event so please be sure to follow @WindowsAzure on Twitter for updates. 

@WindowsAzure tweeted on 10/25/2010:

Just created a Windows #Azure #PDC10 Twitter list (http://bit.ly/b8H40M). Follow the resident Windows Azure experts from the floor.

See my Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post updated 10/22/2010 for a complete list of PDC10 sessions on a single page.


Cory Fowler (@SyntaxC4) posted Presentation Notes from Tech Days 2010 Toronto on 10/25/2010:

On Wednesday, October 26th, I will be presenting the Windows Azure session at Tech Days 2010 in Toronto, Canada. As a summary of my presentation, I am providing links not only to the resources for my talk, but also to some of the concepts that I allude to in my presentation.

Act 1: The Brownfield


I can’t wait to see the glowing smiles in the audience when I announce that it is possible to migrate your legacy applications to the Cloud.

However, to take advantage of the benefits of Cloud Infrastructure, some code changes may be necessary. Not to worry, here are a few links that will help you along the way:

Guidance on Migrating Legacy Applications
Act 2: The Greenfield

Whether you’re working on a brand new Application, or retrofitting your existing Application with some new practices, there are a few things that you’ll want to be aware of.

Using Storage Services: Making Storage Thread Safe with SyncLock

When using Managed code to access Storage Services, it is a Best Practice to use a Sync Lock to surround the code which is attempting to Create a Blob, Table or Queue. Here’s a small sample from Building your first Windows Azure Application on Channel 9 Windows Azure Platform Training Course.

static bool _isInitialized = false;
static object _gate = new object();
static CloudBlobClient _blobClient;

private void InitializeStorage()
{
    // Quick exit for the common case: storage has already been initialized.
    if (_isInitialized) return;

    lock (_gate)
    {
        // Re-check inside the lock so only one thread performs the initialization.
        if (_isInitialized) return;

        // Initialize Storage here, e.g. create _blobClient from the storage account
        // and create any containers, tables or queues the role needs.

        _isInitialized = true;
    }
}
Enforcing Access Rights on Blobs

Depending on your application it may be necessary to Set Access Control for Containers.

_blobContainer.SetPermissions(new BlobContainerPermissions()
{
    // Allow anonymous public read access to individual blobs in this container.
    PublicAccess = BlobContainerPublicAccessType.Blob
});
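
For context, here is a minimal sketch of how a container and its public-access setting might be wired together using the Microsoft.WindowsAzure.StorageClient library from the 1.x SDK; the connectionString variable and the "media" container name are illustrative assumptions:

// Illustrative sketch only; connectionString and "media" are assumed names.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();

CloudBlobContainer blobContainer = blobClient.GetContainerReference("media");
blobContainer.CreateIfNotExist(); // no-op if the container already exists

// Anonymous clients may read individual blobs but cannot list the container.
blobContainer.SetPermissions(new BlobContainerPermissions()
{
    PublicAccess = BlobContainerPublicAccessType.Blob
});

With Blob-level public access, a blob's URI can be fetched anonymously while the container's contents cannot be enumerated; use BlobContainerPublicAccessType.Off if everything should stay private.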
Next Steps

Windows Azure Platform (Apress) | Programming Windows Azure (O'Reilly) | Cloud Computing with the Windows Azure Platform (Wrox) | MSDN Magazine

If you’re interested in Getting Started with Windows Azure, these links can get you going:

Resources

For Best Practices and other Training, review these links:


Bruce Kyle announced on 10/25/2010 a 45-minute MSDN Windows Phone 7 Course for iPhone Developers by Bill Loudin:

If you’re an iPhone developer accustomed to working with Xcode and Interface Builder, check out this video where you’ll find out all about the tools for developing Windows Phone 7 apps! You’ll learn the fundamentals of Visual Studio 2010, the Windows Phone emulator and Microsoft Expression Blend for Windows Phone.

Check out Bill Loudin’s 45-minute course on MSDEV, Windows Phone Developer Tools for the iPhone Developer.

For anyone developing for Windows Phone 7, you’ll find additional help and marketing assistance through Microsoft Platform Ready. Sign up today. MPR also offers a rebate of $99 when you put two or more applications into Windows Phone 7 Marketplace. Details of the program will be announced soon.


Nancy Medica announced Webinar: The Windows Azure Platform by Dustin Hicks will take place on 11/17/2010:

Join us on our next webinar: The Windows Azure Platform

Learn how Windows Azure Architecture and its deployment options can offer you:

  • Benefits for your business
  • How to blend on-site IT with Cloud Compute Capabilities

Presented by Dustin Hicks, Azure Technology Specialist.

Dustin assists customers with recommendations about Azure architecture. He has 20 years of IT experience as a developer and architect.

When? Wednesday, November 17th, 11:00 AM – 12:00 PM CDT

Intended for: IT Directors, CIOs, CTOs, IT managers, Lead Developers, Application Managers.

Follow the event on Twitter #CSwebinar

Register now!


MSDN announced on 10/25/2010 a five-hour MSDN Simulcast Event: Windows Azure Firestarter (Level 200) to be held on 12/9/2010 at 6:30 AM PST:

Event Overview

Is cloud computing still a foggy concept for you? Have you heard of Windows Azure, but aren’t quite sure of how it applies to you and the projects you’re working on? Join your Microsoft Developer Evangelists for this free, all-day event combining presentations and hands-on exercises to demystify the latest disruptive (and over-hyped!) technology and to provide some clarity as to where the cloud and Windows Azure can take you.

6:30 AM Morning Sessions (Pacific Time)

Getting Your Head into the Cloud
Ask ten people to define “Cloud Computing,” and you’ll get a dozen responses. To establish some common ground, we’ll kick off the event by delving into what cloud computing means, not just by presenting an array of acronyms like SaaS and IaaS, but by focusing on the scenarios that cloud computing enables and the opportunities it provides. We’ll use this session to introduce the building blocks of the Windows Azure Platform and set the stage for the two questions most pertinent to you: “how do I take my existing applications to the cloud?” and “how do I design specifically for the cloud?”

Migrating Applications to Windows Azure
How difficult is it to migrate your applications to the cloud? What about designing your applications to be flexible inside and outside of cloud environments? These are common questions, and in this session, we’ll specifically focus on migration strategies and adapting your applications to be “cloud ready.”
We’ll examine how Azure VMs differ from a typical server – covering everything from CPU and memory, to profiling performance, load balancing considerations, and deployment strategies such as dealing with breaking changes in schemas and contracts. We’ll also cover SQL Azure migration strategies and how the forthcoming VM and Admin Roles can aid in migrating to the cloud.

Creating Applications for Windows Azure
Windows Azure enables you to leverage a great deal of your Visual Studio and .NET expertise on an ‘infinitely scalable’ platform, but it’s important to realize the cloud is a different environment from traditional on-premises or hosted applications. Windows Azure provides new capabilities and features – like Azure storage and the AppFabric – that differentiate an application translated to Azure from one built for Azure. We’ll look at many of these platform features and examine tradeoffs in complexity, performance, and costs.

10:00 AM Cloud Play
Enough talk! Bring your laptop or pair with a friend, as we spend the afternoon with our heads (and laptops) in the cloud. Each attendee will receive a two-week “unlimited” Azure account to use during (and after) our instructor-led hands-on lab. During the lab you’ll reinforce the very concepts we discussed in the morning as you develop and deploy a compelling distributed computing application to Windows Azure.

1:00 PM The Silver Lining: Evaluations

Presenters: Brian Hitney, Senior Developer Evangelist, Microsoft Corporation; Peter Laudati, Senior Developer Evangelist, Microsoft Corporation; and Jim O'Neil, Senior Developer Evangelist, Microsoft Corporation

View other sessions from: Simulcasts: Live events on the latest technologies

If you have questions or feedback, contact us.

Registration Options

Event ID:
1032464261

Register without a Windows Live™ ID

Register Online


Markus Klems announced on 10/27/2010 a BrightTALK™ Cloud Infrastructure Summit on October 27th:

The BrightTALK™ Cloud Infrastructure Summit on October 27th … will feature a lineup of industry experts including:

  • Roger Bearpark, Assistant Head of ICT within Hillingdon
  • Peter Meinen, CTO at Fujitsu
  • Bob Tarzey, Analyst and Director for Quocirca
  • Peter Judge, Editor for eWeek Europe UK
  • David Lucas, CSO at Global Computer Enterprise
  • Ian Osborne, Director of the Digital Systems Knowledge Transfer Network at Intellect Technology Association

Nicole Hemsoth reported on 10/25/2010 Harvard to Present “Hands On” Cloud Course in a post to the HPC in the Cloud blog:

A number of universities have been offering a range of virtualization-centered courses, both for their graduate and undergraduate students, but more are reaching out to the business and admin community to build attendance.

Just recently, Harvard University announced that it too will be providing a “hands on” learning opportunity using Amazon’s EC2 on its campus beginning in early January.  

The seminar will address migrating one’s infrastructure to the cloud in addition to discussing how to virtualize local infrastructure using a range of proprietary and open source software. Other goals include describing in detail the technical aspects of virtualization and what it means in terms of scalability, cost, and performance, thus providing a springboard for attendees to determine if the cloud makes sense for their own specific infrastructure and needs.

Despite the fact that there are some basics covered, the class is not necessarily a “beginner’s guide to cloud”; instead, it is aimed at CTOs, IT managers, system administrators and instructors who want to start using the cloud to deploy a range of large technical computing projects. The course’s organizers put it simply: “to participate, you should be comfortable with command-line environments.”

The hands-on cloud course will be taught by Dr. David Malan, who teaches courses in Harvard’s Computer Science Department as well as other classes in the School of Engineering. In addition to his teaching and research work that is focused on pattern detection within large datasets, Malan is also the founder of startups, including Diskaster. …

Nicole continues with a Q&A interview with David Malan.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

David Linthicum claimed “The recent announcement that Amazon Web Services will provide free entry-level usage of EC2 will ensure its success” as a deck for his Amazon's new AWS bet: You can't beat free post of 10/26/2010:

In a recent announcement, Amazon.com decided to provide one year of access to a "micro-instance" on its EC2 (Elastic Compute Cloud) and a few other Amazon Web Services (AWS) products at no charge for new users. "The offer includes 750 hours' use per month of a Linux micro-instance, including 613MB of memory as well as 32- and 64-bit support, along with 750 hours' monthly use of the Elastic Load Balancer tool and 15GB worth of data processing," the company said. In addition, Amazon.com announced a no-charge usage tier for its SimpleDB tool, Simple Queue Service, and Simple Notification Service.

That's a good hunk of the cloud for free, if you ask me, and the right thing for Amazon.com to do now as cloud providers are looking to grab market share. This action will pay huge dividends in the near future, considering that a few four-figure footholds in an enterprise this year could mean six- to seven-figure deals within two years.

Although cloud computing is known to be cheap, the fact that somebody has to generate a purchase order or expense an infrastructure cloud computing invoice has kept many potential cloud users on the fence. The challenge is keeping these costs off the radar of corporate accounting until a project is up and running; thus, you're able to prove the value as the costs show up in weekly reports. Doing that in reverse could be career-ending action at many organizations, where cloud computing is a political football and costs are tightly controlled.

This try-it-for-free move will clearly create additional rogue clouds (unsanctioned cloud computing projects), as the initial prototyping costs are nada. Applications that solve business problems will be adopted quickly, no matter how they were created or whether they were blessed by corporate IT. Amazon.com will penetrate deeper into enterprises using this strategy; before you know it, the use of the Amazon Web Services cloud will be much more pervasive.

I would classify an AWS micro-instance as a “good hunk of the cloud,” whether free or not.


Daniel Berringer asks and answers What the heck is an ECU? in this 10/18/2010 post:

Amazon’s ability to leave the price of the original single-ECU instance unchanged in the four years since the launch of EC2 suggests they missed the Moore’s Law memo. In particular, Amazon’s success owes to the invention of the ECU as a new measure of compute capacity that clouds (pun intended) competitive comparisons.

Amazon’s definition of the ECU at http://aws.amazon.com/ec2/instance-types/:
“We use several benchmarks and tests to manage the consistency and predictability of the performance of an EC2 Compute Unit. One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. This is also the equivalent to an early-2006 1.7 GHz Xeon processor referenced in our original documentation”

Setting aside the fact that AMD did not sell a 1.0-1.2 GHz Opteron in 2007, Amazon’s definition falls short of making the ECU measurable. A “metric” that defies measurement might have been Amazon’s intention, but it creates problems for everyone else.

Benchmarks like those run by Jason Read at cloudharmony.com show inconsistency in Amazon’s application of the ECU, with the 4 ECU High Memory instances performing better than the 5 ECU High Compute instance and, similarly, the 6.5 ECU High Memory performing better than the 8 ECU Standard Large instance. Amazon may have internal benchmarks that do not show these discrepancies, but Amazon has so far decided not to let its customers in on the nature of these benchmarks.
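
As a crude illustration of the measurement problem, one way to compare instance types yourself is to time the same CPU-bound workload on each and compare elapsed times. The C# sketch below is only that kind of rough probe; it is not Amazon's (unpublished) ECU benchmark, and a single timing like this ignores memory and I/O behavior entirely:

using System;
using System.Diagnostics;

class EcuProbe
{
    // Count primes below 'limit' by trial division: a purely CPU-bound workload.
    static int CountPrimes(int limit)
    {
        int count = 0;
        for (int n = 2; n < limit; n++)
        {
            bool isPrime = true;
            for (int d = 2; d * d <= n; d++)
            {
                if (n % d == 0) { isPrime = false; break; }
            }
            if (isPrime) count++;
        }
        return count;
    }

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        int primes = CountPrimes(2000000);
        sw.Stop();

        // Run the same binary on two instance types; the ratio of elapsed times is one
        // rough, single-dimensional stand-in for their relative compute capacity.
        Console.WriteLine("Primes found: {0}, elapsed: {1} ms", primes, sw.ElapsedMilliseconds);
    }
}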

goCipher created infrastructure-as-a-service instances (http://www.domaingurus.com/ec2) replicating several of the Amazon instances. goCipher’s adoption of Amazon instance types and ECUs follows the successful example of the competitive PC industry adopting the IBM PC architecture. There exists no uncertainty about the meaning of a GB of memory, a TB of storage, or a TB of bandwidth. Establishing a consensus measure of the ECU represents the final piece of the puzzle.

Gordon Moore identified the doubling of transistors in a processor every 18 months. Moore’s Law does not necessarily apply to all the components that go into a computer, the computer itself, or a cloud computing offer. For example, the relatively slower price decline of DRAM, at approximately 30% per year, means memory consumes an increasing portion of total costs. The price-performance improvements of storage tend to exceed Moore’s Law. There exists no Moore’s Law equivalent for operating systems or software.

Reasonable people can come to different conclusions about the expected pace of price-performance improvements, but the immunity of Amazon instance types to Moore’s Law over a four-year period is not business as usual. A reliable means of comparing cloud computing offers needs to emerge for the nascent cloud computing industry to become less chaotic. GoCipher is the first company to directly offer Amazon instance types, but we believe everyone in the emerging ecosystem would be better served by quoting offers in terms of ECUs.

GoCipher’s Cloud Price Calculator is here.


James Staten (@Staten7) asserted Free isn't the half of it. AWS pushes cloud economics further in this 10/25/2010 post to his Forrester blog:

This week Amazon Web Services announced a new pricing tier for its Elastic Compute Cloud (EC2) service and in doing so has differentiated its offering even further. At first blush the free tier sounds like a free trial, which isn't anything new in cloud computing. True, the free tier is time-limited (you get 12 months) and capacity-limited along multiple dimensions. But it's also a new pricing band. And for three of its services, SimpleDB, Simple Queue Service (SQS), and Simple Notification Service (SNS), the free tier is indefinite. Look for Amazon to lift the 12-month limit on this service next October, because the free tier will drive revenues for AWS long term. Here's why:

A few weeks back I posted a story about how one of our clients has been turning cloud economics to their advantage by flipping the concept of capacity planning on its head. Their strategy was to concentrate not on how much capacity they would need when their application got hot, but on how they could reduce its capacity footprint when it wasn't. As small as they could get it, they couldn't shrink it to the point where they incurred no cost at all; they were left with at least a storage and a caching bill. Now, with the free tier, they can achieve a no-cost footprint.

By creating the free tier, Amazon has essentially created a new incentive for thinking small. And free is sticky. When deciding which public cloud platform you want to be hosted on, how can you argue with one that costs you nothing when traffic is low?

Before you jump to the conclusion that the free tier is just a promotion and loss leader for Amazon and thus they will never make it permanent, let me point out that the free tier will actually make money for AWS. It does this in two ways. First, if you know you can shrink your application down to the free tier, you have less incentive to switch platforms and that means AWS can count on you as a customer and can count on revenue when you're busy. The free tier is teeny, tiny.

Second, Amazon is a master at the game of Tetris and this is the game of profitability in Infrastructure as a Service (IaaS) and server virtualization. In the world of virtual hosting, there is a percent of sustained average utilization of the infrastructure where you cross over from red to black. At only 20% utilized, your virtual environment might be losing money as your on-going operating costs are higher than the revenue from hosting. As you ratchet up the utilization you hit a point where the revenue takes over. Let's call that 60% as our research has shown this to be roughly the break-even point for an average IaaS environment. The number varies based on the efficiencies of your operations, the cost of your infrastructure and your hosting costs. Raise the utilization higher than this cross-over point and each new VM you host is pure profit. So your number one objective as an IaaS business is to keep the utilization above this line. If you are a traditional VM hosting business with 12 month contracts you don't have to worry about this because you set your VM hosting prices above this cross-over point, get customers to commit to a utilization level, assign the resources and work to fill up the next box.

In the IaaS business, you don't have long term contracts, so you need pricing that incents users onto the platform and forecasting models that ensure you build the right-sized environment for the expected demand. Get it wrong: lose money. Other than playing with pricing - because competition makes this difficult - how else can you play this game to win? This is where Tetris comes in. If you have a bunch of large and extra large VMs, you can fill up a box pretty quick, but if they go away you gotta find replacements, otherwise you go negative on that system. Since the break-even point isn't a full system but one much less than full, you can hedge against the loss of an extra large instance going away by filling up the box with a bunch of smaller instances. And the more you incent customers to use small instances, the more small blocks you have to fill up systems - taking you further above the cross-over point. And as we mentioned, everything above the cross-over line is pure profit, so you aren't hurting profitability if some of the smalls are free; you're just impacting short-term margins.
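
To make the cross-over arithmetic concrete, here is a back-of-the-envelope sketch in C#. The host cost, slot count and slot price below are invented placeholders; only the roughly 60% break-even figure comes from the research cited above:

using System;

class IaasBreakEven
{
    static void Main()
    {
        // Placeholder assumptions, not real AWS or Forrester figures.
        double monthlyCostPerHost = 900.0;  // amortized hardware, power and ops per physical server
        int smallSlotsPerHost = 30;         // how many "small" VM slots fit on one host
        double revenuePerSmallSlot = 50.0;  // monthly price of one paid small instance

        // Utilization at which paid-slot revenue covers the host's fixed cost.
        double breakEven = monthlyCostPerHost / (smallSlotsPerHost * revenuePerSmallSlot);
        Console.WriteLine("Break-even utilization: {0:P0}", breakEven); // 60% with these numbers

        // Above break-even, every additional paid slot is margin, which is why a few free
        // micro instances filling otherwise-empty slots cost the provider very little.
        double paidUtilization = 0.80;
        double margin = paidUtilization * smallSlotsPerHost * revenuePerSmallSlot - monthlyCostPerHost;
        Console.WriteLine("Monthly margin per host at {0:P0} paid utilization: {1:C0}", paidUtilization, margin);
    }
}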

If you want to play this game to win you can bring three new types of pieces to play:

* Reserved instances - blocks (and revenue) you can count on long term

* Small, cheap instances - and lots of them so you never have an empty hole in a row of the Tetris bucket

* Spot instances - so you can fill boxes even higher and get paid to do this, then raise the prices (profits) and kick blocks out that are impeding reoptimization of the pool

Funny how Amazon has just such pieces today. No wonder they continue to lead the IaaS market.

Are you taking advantage of cloud economics yet? If you aren't, you're running out of excuses.

Related Forrester Research

James serves Infrastructure & Operations Professionals at Forrester Research.


Panagiotis Kefalidis (@pkefal) posted his Amazon Web Services free tier analysis on 10/25/2010:

Amazon announced the AWS Free Usage Tier (http://aws.amazon.com/free/) last week, which will start from November the 1st. I know some people are excited about this announcement and so am I, because I believe that competition between cloud providers always brings better service for the customer, but in Amazon's case it's more like a marketing trick than a real benefit, and I'll explain why in this post. Let me remind you at this point that this is strictly a personal opinion. Let me also say that I have experience with AWS too.


Certainly, having something free to start with is always nice, but what exactly is free and how does it compare to the Windows Azure platform? First of all, Windows Azure has a similar free startup offer, called the Introductory Special, which gives you free compute hours, storage space and transactions, a SQL Azure Web instance, AppFabric connections and Access Control transactions, and free traffic (inbound and outbound), all within certain limits of course. Then there is the BizSpark program, which also gives you a very generous package of Windows Azure Platform benefits to start developing on, and of course let's not forget the MSDN Subscription Windows Azure offer, which is even more buffed up than the others.

Ok, I promised the Amazon part, so here it is. The AWS billing model is different from Windows Azure's. It's very detailed: a lot of things are broken into smaller pieces, each of them billed in a different way. Some facts:

  • Load balancing for EC2 instances is not free. Not only do you pay compute hours, but you're also charged for the traffic (GB) that goes through your balancer. Windows Azure load balancing is just there and it just works, and of course you don't pay compute hours and traffic just for that.
  • On EBS you're charged for every read and write you do (I/O) and for the amount of space you use; snapshot size counts not in the total but on its own, and you're also charged per snapshot operation (Get or Put). On Windows Azure Storage you pay for two things: transactions and the amount of space you consume. Also, on snapshots only your delta (the differences) is counted against your total, not the whole snapshot.
  • SimpleDB is charged per machine hour* consumed and per GB of storage. With Windows Azure Tables you only pay for your storage and transactions. You might say that I have to compare this to S3, but I don't agree; S3 is not as close to Windows Azure Tables as SimpleDB is. What is even more disturbing about S3 is the fact that there is a durability guarantee of 99.99%, which actually means you can lose (!!) 0.01% of your data.
  • There is no RDS instance (based on MySQL) included in the free tier. With the Introductory Special you get a SQL Azure Web database (1GB) for 3 months, or for as long as you have a valid subscription when you're using the MSDN Windows Azure offer, where you actually get 3 databases.

For me, the biggest difference is the development experience. Windows Azure offers a precise local emulation of the cloud environment on your development machine, called the DevFabric, which ships with the Windows Azure Tools for VS2008/VS2010. All you have to do is press F5 on your cloud project and you get local emulation on your machine to test, debug and prepare for deployment. Amazon doesn't offer this kind of development environment. There is integration with Eclipse and other IDEs, but every time you hit the Debug button you're actually hitting real web services with your credentials, consuming from your free tier, and as soon as you're done consuming that you start paying to develop and debug. The free tier is more like a "development tier" for me. Windows Azure offers you both: the development experience you expect, without any cost, on your local machine with the DevFabric, and a development experience in the real cloud environment where you can deploy and test your application also without any cost, unless of course you exceed your free allowance.
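
As a minimal sketch of why that local emulation matters, the same configuration-reading code can run unchanged under the DevFabric and in the cloud through the ServiceRuntime API in the 1.x SDK. The "DataConnectionString" setting name below is an assumption; it would have to exist in the role's ServiceConfiguration.cscfg:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class StorageConfig
{
    public static CloudStorageAccount GetAccount()
    {
        // Locally the setting can point at development storage ("UseDevelopmentStorage=true");
        // in the cloud it points at a real storage account. The code path is identical.
        string connection = RoleEnvironment.IsAvailable
            ? RoleEnvironment.GetConfigurationSettingValue("DataConnectionString")
            : "UseDevelopmentStorage=true"; // fallback when running outside any fabric

        return CloudStorageAccount.Parse(connection);
    }
}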

One last thing: this offer is only available to new accounts starting from November the 1st, restricting old accounts to the old model, which actually costs more money. Sooner or later Amazon will have to give old accounts a way to take advantage of this offer; otherwise it's pretty much certain that a lot of them are going to be orphaned. People will create new accounts to benefit from the offer, and of course the main reason that will drive this behavior is cost.

Some may say you can't compare AWS to Windows Azure, because they are not the same. AWS is mostly IaaS (Infrastructure as a Service) and Windows Azure is PaaS (Platform as a Service), and I couldn't agree more. But what I'm comparing here are features that already exist on both services. I'm not comparing EC2 instance sizes to Windows Azure instance sizes; I'm comparing the load balancing, SimpleDB, etc.

* A machine hour is a different concept from a compute hour and is beyond the scope of this post.


Randy Bias (@RandyBias) posted @Cloudscaling CEO @randybias on #VMworld & #Interop 2010 on 10/25/2010:

During my most recent trip I was speaking at both VMworld Europe 2010 and Interop NYC 2010 – Enterprise Cloud Summit. This update attempts to provide a candid look at some of the trends, thoughts, and insights that occurred to me while engaging with customers, vendors, and the greater cloud community at these two events.

Here I will briefly cover the following points:

  • Disconnects in Telco/SP Cloud Strategy
  • ‘Hybrid Cloud’ Still Causing Confusion
  • Public Cloud Hits a Tipping Point?
  • Enterprise IT Governance

Telco/SP Enterprise Cloud Strategy Doomed to Failure?
How many of the large telecommunications and service providers have ‘enterprise’ cloud strategies? Their basic strategy boils down to:

  1. Deploy an ‘enterprise cloud’, usually with VMware
  2. Farm current customer base for cloud customers
  3. Create a suite of services around the cloud offering that make it compelling

In talking to folks at VMware and various telcos and service providers, I heard a tremendous amount of focus on #3.  Many times I heard that “infrastructure is just a commodity” and “we’ll compete by providing value-added services.”  This is clearly a sound strategy, except that most of these providers are not building commodity infrastructure.  Most enterprise clouds have a very expensive cost-basis for their build-outs.  The most stark difference can be seen with VCE Vblocks clouds, which are almost 10x the cost of a commodity cloud[1].

Setting aside a number of the other issues with these enterprise clouds[2], how can you sell enough value-added services on top of a 10x more expensive infrastructure solution to make up the difference?  The answer is that you can't, and most telcos and SPs with this strategy will eventually have to face the math.

Related to this, #2 above seems to have telcos and SPs focusing on protecting existing customers.  How much more failed can a cloud strategy be than if it's defensive?  The only good strategy here, in response to the Amazon juggernaut, is a frontal assault using those assets and capabilities you have and Amazon does not.

It’s interesting that even VMware, probably Amazon’s key competitor, is preaching, along with EMC, a ‘Journey to the Private Cloud’ and pounding the pulpit for enabling developers.  This quote from Paul Maritz, CEO of VMware, is choice:


“In the final analysis they [purchasers] are not the people making strategic decisions in the business. Developers have always been at the leading edge, because that’s where business value is generated. Things that don’t differentiate you at a higher level will be SaaS apps – which will also be purchased at a higher level. The differentiated stuff you have to do yourself, and that means software development”.


Most of the current VMware-based enterprise clouds are simply trying to sell a pure virtualization outsourcing play to IT admins of their existing customer base.  If a developer friendly strategy is working for Amazon and VMware is pushing a similar vision, then it’s incumbent on those who want to be successful in the emerging cloud computing space to think about where developer enablement fits in their strategy.

I very much hope that telcos and SPs will start to develop some strategies and cloud solutions that are ultimately competitive.  The worst thing that can happen here is to have a GOOG/AMZN duopoly. (Please see my earlier post on the rumor of Google launching their own EC2-like service.)

‘Hybrid Cloud’ Confusion
In a panel at Interop on adoption of public clouds by enterprise customers there was a heated debate about the meaning of ‘hybrid cloud’.  This debate, mostly between myself and Paddy Srinivasan of Cumulux, was helpful for attendees, although as in most conversations of this type there is danger of devolving into an argument on semantics.  I made some pretty strong assertions about the general lack of usefulness in any context of the term ‘hybrid cloud’[3].  Essentially I simply reprised my posting from February this year: Hybrid Clouds are Half-baked.

Why stick on this?  For me, this is a question of straight talk.  We all live with the confusion of ‘cloud’ every day in our work, but when vendors use the term as something new to denote simple Service-Oriented Architecture (SOA) or the joining of two clouds (aka ‘cloud bridging’), that muddies the conversation further. The arbitrary creation of fuzzy marketing terms and pretending as if they have meaning does a disservice to all those who are trying to understand how to move forward in this new world.  Even the Wikipedia entry for Hybrid Cloud is a mess.

Another key reason to push on avoiding this term is that it ‘over promises’ on cloud.  Most companies don’t need a hybrid solution at the moment.  Certainly, some services, like identity management and authentication, need to ‘bridge’ the firewall, but a single app doesn’t need to exist in both places nor does it need to move back and forth.  Neither do virtual machines (VMs).  In fact, if you are following best practices using tools like libcloud, fog, or jclouds to manage instances and Chef and Puppet to package your app deployments, then you can deploy to any cloud on demand.  This approach makes far more sense than trying to move large VMs back and forth across wide area networks.

It’s just like buying two Internet connections from two separate ISPs, which certainly isn’t a ‘hybrid network’.  Using multiple clouds is a best practice enabled by proper tooling that increases portability & interoperability while reducing risk, not some kind of ‘hybrid’.

Public Cloud Hitting the Big Time?
Just before the enterprise public cloud adoption panel, Brian Butte, the moderator, informed me that a poll of the audience had found that 95% of the enterprise attendees were using public clouds or planning on it.  A stark turnaround from the beginning of the year, when most were focused on private cloud development.

This parallels our experience that most enterprises will ‘fail forward’ trying to build private clouds.  By fail forward, we mean here that IT departments will attempt to deploy highly automated virtualization systems thinking they are private clouds, but not hitting the mark.  As Nick Carr pointed out earlier in Does IT Matter:


Of the more than eight thousand systems projects Standish examined, only 16 percent were considered successes—completed on time and on budget and fulfilling the original specifications. Nearly a third were canceled outright, and the remainder all went over budget, off schedule, and out-of-spec. Large companies—those with more than $500 million in annual sales—did even worse than the average: Only 9 percent of their IT projects succeeded. [ed. emphasis mine]


As I said in my Interop keynote presentation on Monday for the private cloud track of the Enterprise Cloud Summit, it’s not a real private cloud unless it’s built like a public cloud.  Most enterprise IT folks have little idea how to move forward, either culturally or technologically in building a true private cloud.  How much worse is the 9% success rate likely to be in this case?

So what we’re probably seeing now is that enough early attempts at private cloud have failed or have been so slow that business unit owners are pushing for public cloud solutions and demanding immediate success.

Enterprise IT as Governor, not Control-Freak
I hit on this point repeatedly, but I also heard it from a number of other folks.  We’re clearly moving into a world of mixed IT capacity.  Some will be onsite and much will eventually be offsite and run by a multiplicity of cloud vendors: infrastructure, platforms, and applications.  In this new world, it’s more important for enterprise IT to provide governance rather than direct control.  This is very similar to how modern manufacturing or facilities management works.

Apple, Inc., for example, does not manufacture its hardware.  Instead, this key capability is outsourced, yet Apple has become an expert in managing a large extended supply chain of vendors and retains ultimate responsibility for delivering high-quality hardware goods.

Similar to the process of managing global manufacturing relationships, I predict enterprise IT will shift to spend more time governing a large supply chain, not running each individual solution themselves.

Wrap-Up
All of this just further reinforces my thinking about Cloudscaling’s general approach to the marketplace.  We want to help telcos and service providers compete with AMZN/GOOG, while helping enterprises to understand how to embrace and manage the transition to cloud computing.  We think this means that telcos and service providers have inherent advantages that AMZN/GOOG can’t compete with.  Unfortunately, when these advantages are coupled with expensive ‘enterprise cloud’ infrastructure, much of the potential competitive opportunity is lost.

Imagine instead that the inherent advantages of a large telco, such as geographical dominance, access to wireless networks, advanced networking capabilities such as MPLS networks, and cheap IP backbones, were coupled with a cost-competitive Amazon EC2-like cloud infrastructure.  This ‘consumer cloud’ infrastructure combined with the telco’s natural advantages makes for a formidable competitive advantage in picking up all of the new cloud apps, particularly those that service mobile device developers.

When looking at how the marketplace appears to be unfolding, it seems clear that enterprises will continue to be confused by hype and promises such as those of ‘hybrid clouds’ and will miss the boat on delivering in the short term, pushing IT departments into adopting public cloud solutions whether they like it or not.  It also seems clear that servicing these new cloud apps is critical.

To us, this means it’s important to have *both* an enterprise cloud, to capture and retain existing enterprise customers, and a low-cost commodity cloud, built like Amazon’s and Google’s, targeted at consumers, developers, and new cloud applications, particularly those apps that drive the burgeoning smartphone market and play to any telco’s inherent strengths.


[1] I have a detailed analysis in the pipeline and I will be showing some of these numbers in upcoming presentations at conferences as well.
[2] We’ll cover this in much more depth in the near future.
[3] This will likely cause some negative feedback from legacy enterprise vendors whose marketing folks use this term extravagantly.


The HPC in the Cloud blog reported OpenNebula Project Releases Version 2.0 of its Open Source Toolkit for Cloud Computing on 10/25/2010:

The OpenNebula Project announced today a major new release of its OpenNebula Toolkit, a fully open source cloud computing tool for managing a data center's virtual infrastructure. The toolkit includes features for integration, management, scalability, security and accounting that many enterprise IT shops need for private and hybrid cloud adoption. This newest release also emphasizes standardization, interoperability and portability, providing cloud users and administrators with a choice of several popular cloud interfaces and hypervisors, and a flexible architecture that can accommodate practically any hardware and software combination in a data center.

"This new version has matured thanks to an active and engaged community," said Ignacio M. Llorente, co-lead and Director of the OpenNebula Project. "By being fully open-source and having a flexible and extensible architecture, several users have been able to contribute innovative new features and many others have provided valuable feedback that allowed us to focus on features that were truly of interest to our users."

OpenNebula is downloaded several thousand times per month from its website, and is widely used across multiple industries, such as hosting, telecom, HPC, and eGovernment.

"OpenNebula is the result of many years of research and the interaction with some of the major players in the Cloud arena. From the beginning, OpenNebula has been designed to be flexible enough to adapt to any infrastructure environment and to scale to hundreds of thousands of virtual machines and cores," said Ruben S. Montero, co-lead and Chief Architect of the OpenNebula Project.

About the OpenNebula Project

OpenNebula is an open-source project aimed at building the industry standard open source cloud computing tool for managing distributed data center infrastructures. OpenNebula was first established as a research project in 2005 and made its first public release in March 2008. OpenNebula is being used as an open platform for innovation in several flagship international projects to research the challenges that arise in cloud management, and also as a production-ready tool in both academia and industry. C12G Labs provides value-added professional services to create solutions, products and services based on OpenNebula.

For more info: http://www.OpenNebula.org


<Return to section navigation list> 

1 comment:

Clement Yuan said...

SQL Azure is SQL Server in the cloud. It's really powerful. SQL Azure comes with a firewall. This service saves me time. I had never used SQL Azure before; after reading this article, I must set up a SQL Azure database. Because SQL Azure is always a clustered database, I won't have to worry about any server issues.