Tuesday, April 26, 2011

Windows Azure and Cloud Computing Posts for 4/25/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 4/26/2011 for new articles marked • by Fernando Garcia Loera, Wayne Citrin, Mary Jo Foley, Bruce Kyle and me.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display it as a single article you can navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

My 50-minute, on-demand Linking Access tables to on-premise SQL Server 2008 R2 Express or SQL Azure in the cloud Webcast for Que Publishing is live as of 4/26/2011. From the description:

Microsoft SQL Server has been the preferred back-end database for heavy-duty, multiuser Access applications since Access 2000 introduced the capability to link to the Microsoft Data Engine (MSDE). Microsoft Access 2007 and 2010 don’t support user- and group-level security for new multi-user database projects, so it’s more important than ever to use the SQL Server Migration Assistant (SSMA) for Access to move to SQL Server databases, which maximize data accessibility, reliability and security.

If you don’t want the burden of maintaining and backing up SQL Server databases, SSMA lets you migrate directly to remote SQL Azure instances running in Microsoft data centers.

In this Webcast, you’ll learn how to:

  • Download and install no-charge SQL Server 2008 R2 Express and SQL Server 2008 R2 Management Studio (SSMS) Express to manage on-premises database back ends
  • Download and install the SQL Server team’s free SSMA, which substitutes for Access’s Upsizing Wizard in linking scenarios
  • Use the SSMA to link existing Access queries, forms, and reports to a SQL Server 2008 R2 Express database.
  • Establish security with user logins and database roles you create in SSMS.
  • Obtain a 30-day free trial subscription to a 1-GB SQL Azure Web database that you can convert to a $9.99 per month paid subscription
  • Use SSMA to create a SQL Azure server with a trial subscription.
  • Use the SSMA to link existing Access queries, forms, and reports to the trial SQL Azure database running in a Microsoft data center.
  • Use SSMS to add security with SQL Azure user logins and database roles.

You can download Northwind.mdb for upsizing to SQL Server and SQL Azure from my Windows Live SkyDrive account here.

You also can download NwindAzure.mdb, an Access front end with tables linked to a publicly accessible Northwind SQL Azure database, here. Running NwindAzure.mdb opens a connection to the Northwind database running in Microsoft’s South Central US (San Antonio, TX) data center.

Register here.


Jonathan Rozenblit described Deploying a Simple Cloud App: Part 6 - Taking Down the Application so your credit card won’t be billed in a 4/25/2011 post:

If you’ve started reading from this post, you’ll need to go through the previous parts of this series before starting this one:
Introduction
Part 1: Provisioning and Configuring SQL Azure
Part 2: Provisioning a Storage Account
Part 3: Configuring the Service Package
Part 4: Configuring the Hosted Service, Deploying the Package, and Testing
Part 5: Promoting from Staging to Production

Since this deployment is for learning purposes only, we need to make sure that we take down the deployment so that your credit card won’t be billed.

Dropping the SQL Azure Database

  1. From the Windows Azure Management Portal, click on Database in the left hand navigation.
  2. In the left hand top navigation, expand the subscription under which you created the SQL Azure database server and database.
  3. Expand the SQL Azure database server and highlight the NerdDinner database.
  4. Click Drop from the ribbon.
  5. You’ll be asked if you want to drop the database. Click Drop.

The database has now been dropped.

IMPORTANT: Once a database is dropped, it can’t be restored. In a real scenario, make sure that the database you select to drop is, in fact, the database you want to drop.

Dropping the SQL Azure Server

  1. Select the server node in the left hand top navigation.
  2. Click Drop from the top toolbar.
  3. You’ll be asked if you want to drop the server. Click Drop.

The database server has now been dropped.

IMPORTANT: Once a database server is dropped, it can’t be restored. In a real scenario, make sure that the server you select to drop is, in fact, the server you want to drop.

Deleting the Storage Account

  1. Click on Hosted Services, Storage Accounts & CDN from the left hand bottom navigation.
  2. Click on Storage Accounts (X) (where X is the number of storage accounts that you have provisioned) from the left hand top navigation.
  3. From the right hand list, expand the subscription under which you provisioned the storage account.
  4. Click on the account you provisioned earlier.
  5. Click Delete Storage from the top ribbon.
  6. You’ll be asked if you’re sure you want to delete the storage service. Click Yes.

The storage account has now been deleted.

IMPORTANT: Once a storage account is deleted, the account and everything stored in it is deleted and it can’t be restored. In a real scenario, make sure that the account you select to delete is, in fact, the account you wanted to delete.

Deleting the Hosted Service

  1. Click on Hosted Services, Storage Accounts & CDN from the left hand bottom navigation.
  2. Expand the subscription under which you created the hosted service.
  3. Expand the previously created hosted service.
  4. Click on the row that specifies Deployment as the type.
  5. Click Stop in the ribbon.
    After a few moments, the list will refresh indicating that the deployment has been stopped.

    NOTE: From a billing perspective, even though your deployment is stopped (i.e. no one can access the hosted service), you’ll still be billed for the compute hours. This is because your deployment is still consuming resources on the server to which it was deployed.

  6. With the deployment still highlighted, click Delete from the ribbon.
  7. You’ll be asked if you want to delete the deployment. Click Yes.

    IMPORTANT: Once a deployment is deleted, it can’t be restored. In a real scenario, make sure that the deployment you select to delete is, in fact, the deployment you want to delete.

    After a few moments, you will see that the deployment has been deleted.

  8. Click on the row that specifies Hosted Service as the type.
  9. Click Delete from the toolbar.
  10. You’ll be asked if you want to delete the hosted service. Click Yes. After a few moments, you will see that the hosted service has been deleted.

With that, you’ve now removed all of the resources that you allocated during the setup of the environment.

Congratulations!

You have successfully set up staging and production environments, deployed an application to the Cloud, and then decommissioned those environments when they were no longer required. Let’s take a step back for a moment and reflect on what we’ve done here and the ease with which we did it. We’ve proven why Cloud computing works and how IT Pros, such as yourself, now have an infinite platform on which to deploy solutions that deliver on business opportunities without the constraints of physical infrastructure and geographic location.

Comments and Feedback

Take a moment to share what you thought of the walkthrough, what you’ve learned, and what next steps you’ll take on your journey to the Cloud in this LinkedIn group discussion. I’ll be reading through your responses and taking your feedback as input for the next walkthrough and series of events that we’ll do together. We’ll go deeper into Windows Azure concepts and explore further.
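
Note: The portal steps Jonathan describes can also be scripted against the Windows Azure Service Management REST API. The console sketch below is illustrative only; the subscription ID, hosted service name and management-certificate thumbprint are placeholders, and the deployment must already be stopped (suspended) before the Delete Deployment call will succeed.

using System;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class DeleteDeployment
{
    static void Main()
    {
        // Placeholders -- substitute your own subscription, service and certificate values.
        string subscriptionId = "<subscription-id>";
        string serviceName = "<hosted-service-name>";
        string slot = "production"; // or "staging"

        string uri = string.Format(
            "https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}",
            subscriptionId, serviceName, slot);

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "DELETE";
        request.Headers.Add("x-ms-version", "2009-10-01");

        // The management certificate must already be uploaded to the subscription
        // and installed in the local certificate store.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var certs = store.Certificates.Find(
            X509FindType.FindByThumbprint, "<certificate-thumbprint>", false);
        store.Close();
        request.ClientCertificates.Add(certs[0]);

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // Delete Deployment is asynchronous; 202 (Accepted) means the request was queued.
            Console.WriteLine("Status: {0}", response.StatusCode);
        }
    }
}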


Steve Yi reminded readers about a Technet Wiki: Handling Transactions in SQL Azure article in a 4/25/2011 post:

Just like SQL Server, SQL Azure fully supports local transactions. TechNet has released an interesting wiki article about how SQL Azure handles transactions, as well as describing some of the basics. I would suggest taking a look, just to get more up to date on what capabilities SQL Azure has to offer.

Click here for the article [of 10/22/2010 by Walter Wayne Berry.]

Let me know if you would like more article highlights like this in the future. We love hearing from our readers about what topics they want to see next.


<Return to section navigation list> 

MarketPlace DataMarket and OData

The Microsoft OData Team announced Terms of Use for the Open Data Protocol Service Validation Tool with a new Wiki page on 4/25/2011:

The Open Data Protocol (OData) is a Web protocol for querying and updating data that provides a way to unlock your data and free it from silos that exist in applications today. OData does this by applying and building upon Web technologies such as HTTP, Atom Publishing Protocol (AtomPub) and JSON to provide access to information from a variety of applications, services, and stores.

OData is released under the Microsoft Open Specification Promise. This allows anyone to create OData services that implement the specification and to freely interoperate with OData implementations. Currently there are several OData service producers and server libraries, including the .NET Framework, Java and Rails, and several client libraries across a range of platforms such as Objective-C, JavaScript, PHP, and Java. The fact that an OData service can be consumed by a wide range of applications and libraries makes interoperability a key requirement. [Emphasis added.]

The goal of this tool is to enable OData service authors to validate their implementation against the OData specification to ensure the service interoperates well with any OData client.

Consumers of the OData protocol can also benefit from this tool by testing the OData service implementations that they are building an experience for, to pinpoint potential issues.

This tool is released under OData.org Terms of Use.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Scott Densmore reported Local Windows Azure Toolkit >= 1.3 + Network Proxy + Local Deploy = Demystified in a 4/23/2011 post:

I was working on updating the TailSpin application to use Windows Azure Access Control Service (ACS) this week and ran into a small problem.  I am using a WebClient in the controller to call back to ACS to get a list of the Identity Providers. While testing the implementation, I would constantly get a 500 error when running in the local dev fabric. When I would run the web site in IIS it would work just fine. This was driving me insane. I spent a few hours "boogling" (bing + google) around looking for an answer. I could not find anything.

There was a good reason for this. Most of the rest of the world doesn’t work behind the Microsoft Corporate Network. The problem comes down to the Identity of the AppPool trying to get through our proxy. I had thought that this was a network (meaning Microsoft Network) problem, yet I couldn’t remember exactly where I had seen this before. Luckily, I work with some awesome people and Wade Wegner reminded me of the post he did that talks about just this problem. You can either remove the <Sites> elements in your service definition or follow Wade’s post and change your startup to change the AppPool Identity for the project.

In the end I just removed the <Sites> element from the service definition to move back to the Hosted Web Core. I can then change this back with the build that I already use to adjust config when I deploy. In university, I would write things down to help me remember important information; I am hoping this does the same.
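
For reference, the <Sites> element Scott mentions lives in the role’s ServiceDefinition.csdef file. A minimal sketch follows (the service, role and endpoint names are placeholders, not taken from TailSpin); deleting the <Sites> block reverts the web role to the pre-SDK 1.3 Hosted Web Core model:

<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole">
    <!-- Removing this <Sites> element switches the role from full IIS back to
         Hosted Web Core, so the web app runs inside the role host process. -->
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="HttpIn" endpointName="HttpIn" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>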


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

IT Pro India suggested “Set up a VPN tunnel between Windows Azure and on-premise resources with this Windows Azure Virtual Network feature” as a deck for its Windows Azure Connect: A virtual platform by Microsoft article of 4/25/2011:

Azure Connect is a new network functionality added by Microsoft for establishing IP-based network connectivity between on-premises and Windows Azure resources. It offers a simple and easy-to-manage mechanism for organizations migrating their existing apps to the cloud.

But the new utility seems to lack functionality, which may cause its adoption rate to decline with time. Windows Azure Connect enables developers to establish direct connectivity to their cloud-hosted virtual machines, enabling remote administration and troubleshooting with the same set of tools utilized for on-premises applications.

Until recently, Microsoft’s position in the cloud seemed really grim but with the advent of Software-as-a-Service (SaaS) products including Office 365 and Platform as a Service (PaaS) offerings such as Windows Azure, it managed to retain its market value for consumers, small businesses and enterprise customers alike.

But not all apps can be moved to the cloud in their entirety. Sometimes it makes sense to keep data on-premises while migrating compute operations to the cloud. In other instances, it makes sense to build a data hub with multiple business partners by connecting to a data source in the cloud while application components remain in a variety of locations.

Microsoft is planning to offer the pre-released version of Azure Connect during the first half of 2011. Whilst setup is relatively simple with no coding, Azure Connect relies on each server connecting to Azure resources for establishing IPsec connectivity.

Once the agent is installed, the server automatically registers itself with the Azure Connect relay in the cloud, and network policies are defined to manage connectivity.

Read more: 2, 3


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Bruce Kyle (left) posted on 4/26/2011 a 00:09:29 video interview with Dave White (right) that described How Quark Promote Hosts Multiple Tenants on Windows Azure:


Quark Promote offers small businesses ways to create a unique, professional image with top-quality, printed marketing materials. Quark’s Vice President of Emerging Technologies explains to ISV Architect Evangelist Bruce Kyle how his team implemented a multi-tenant offering using Windows Azure. He details the design decisions and the lessons learned by moving to Windows Azure and SQL Azure.

Each print vendor can receive a customized Web presence. Dave explains how Quark achieves economies of scale and is able to grow their business as volume increases. He explains the technical and business decisions made by Quark to move to the cloud.

Dave explains the issues the team confronted, the architectural decisions they made, and how they worked closely with consulting firm Neudesic to accomplish their goal.

About Quark

Quark introduced its flagship software — QuarkXPress — in 1987, delivering precision typography, layout, and color control to the desktop computer to change the way people publish around the world.

Quark Promote extends more than 30 years of expertise in design and publishing to give owners and employees of small and mid-sized businesses an easy and professional way to affordably promote their products and services. Moreover, through QuarkPromote.com, we are connecting an entire community of publishing and design experts with small and mid-sized businesses for mutual success.

For other videos about Quark, see:

Other ISV Videos

For videos on Windows Azure Platform, see:

• Wayne Citrin claims Microsoft’s Java Tools for Azure don’t work in his Java in the Azure Cloud post to the JNBridge blog of 4/26/2011:

image Microsoft has been promoting the use of Java in the Azure cloud, and has been providing lots of material showing how it’s done. They’ve also been distributing Java tools for Azure, including an SDK for Eclipse, and an Azure accelerator for Tomcat. Their latest offering is the “Windows Azure Starter Kit for Java,” which provides tools for packaging and uploading Java-based Web applications running on Tomcat or Jetty. In considering this, the main question that comes up is “Why?”

It doesn’t work

“It doesn’t work” is an extreme statement, isn’t it? And Microsoft has demonstrated that it can create Java Web apps and run them on Azure, so why do I say it doesn’t work? The problem is that these examples are extremely constrained.

For example, Azure makes a virtue of its lack of a persistence mechanism. Instances can fail or restart at any time, which means that data isn’t persistent between instances, and applications therefore must not depend on persistent data. However, both Java Web applications and the servers they run on do depend on some sort of persistence or state. With effort, the applications can be re-engineered, but one has to wonder whether it’s worth the effort to do this, or whether the time might be spent moving to a different cloud offering where this re-engineering doesn’t need to be done.

There’s also the problem that the Tomcat and Jetty servers themselves require persistent data to be stored. And the problem gets even worse when we go from a simple servlet container to a full-fledged application server like JBoss, WebLogic, or WebSphere: application servers, and the Java EE programs that run on them, rely even more deeply on persistent data. While some Java EE application servers can be altered to use alternative persistence mechanisms like Azure storage, the process is arcane to most Java EE developers and not worth the trouble; it would probably be simpler to use a cloud offering where the application server can be deployed without alteration. 

In addition, a default application server relies on an extensive array of network endpoints for a variety of protocols that exceeds the number allowed by a worker role or a VM role. To run an app server on Azure, it is necessary to cut down the number of endpoints to the point where much useful functionality is lost. While it may be possible to construct Java EE examples that work as demos, it’s unlikely that any real Java EE apps, web-enabled or otherwise, can be migrated to the Azure cloud without drastic, impractical or impossible, modifications to the underlying application servers in order to accommodate the persistence and networking issues.

It’s not what users want

Beyond the technical issues in getting an app server running on the Azure platform, we need to ask why we would want to do this on a Platform-as-a-Service (PaaS) such as Azure, when it would be far simpler to run such an application on an Infrastructure-as-a-Service (IaaS) offering like Amazon EC2. It’s one thing to say it can be done; it’s another thing to actually want to do it, as opposed to the easier alternatives.

The market seems to bear this out – a recent Forrester study shows that Eclipse (that is, Java) developers prefer Amazon EC2 or Google App Engine, while Visual Studio (that is, .NET) developers prefer Windows Azure. Developers really don’t want to go through the contortions of packaging up their Java app plus the app server or servlet container, then configure and start it up as a special action under elevated privileges in an Azure worker role, just so that they can run Java EE, when they can easily install their application on a convenient Amazon EC2 image.

What users do want, it doesn’t do

Users will want to do things with Java on Azure, but not what the creators of the Azure Starter Kit for Java think they want to do. Rather than running a self-contained Java server in an Azure role (something they can more easily do elsewhere), they will want to integrate their Java with the .NET code more directly supported by Azure. For example, they may well have a Java library or application that they want to integrate with their .NET application. Depending on the Java side’s architecture, the Java might run in the same process as the .NET code, or it might run in its own process, or even a separate worker role. In any case, the Java side wouldn’t need to run in a full-fledged app server; it would simply expose an API that could be used by the .NET application.

A scenario like this is exactly the sort of thing that JNBridgePro supports. Java can be called from .NET code, and can run in the same process or in separate processes running on different machines. Up until now, those JNBridgePro deployments have been in on-premises desktop and server machines. In our upcoming JNBridgePro Cloud Edition, it will be just as straightforward to implement these interoperability scenarios in the cloud. [Emphasis added.]

In summary, there’s a role for Java in the Azure cloud, but we think Microsoft is pushing the wrong scenarios. The Azure Starter Kit for Java is clever, but it (incompletely) solves a problem that cloud developers don’t have, while ignoring the real-world problems that cloud developers do have.

Wayne is CTO of JNBridge and his post is arguably a sales pitch.


• Fernando Garcia Loera reported the availability of the Windows Azure Platform Training Kit - April Update in a 4/25/2011 post:

Overview

The Windows Azure Platform Training Kit includes a comprehensive set of technical content including hands-on labs, presentations, and demos that are designed to help you learn how to use the Windows Azure platform, including: Windows Azure, SQL Azure and the Windows Azure AppFabric.

The April 2011 update of the Windows Azure Platform Training Kit has been updated for the Windows Azure SDK 1.4 and Visual Studio 2010 SP1, and includes three new HOLs as well as updated HOLs and demos for the new Windows Azure AppFabric portal.
Some of the specific changes in the April update of the training kit include:

  • [New] Authenticating Users in a Windows Phone 7 App via ACS, OData Services and Windows Azure lab
  • [New] Windows Azure Traffic Manager lab
  • [New] Introduction to SQL Azure Reporting Services lab
  • [Updated] Connecting Apps with Windows Azure Connect lab updated for Connect refresh
  • [Updated] Windows Azure CDN lab updated for CDN refresh
  • [Updated] Introduction to the AppFabric ACS 2.0 lab updated to the production release of ACS 2.0
  • [Updated] Use ACS to Federate with Multiple Business Identity Providers lab updated to the production release of ACS 2.0
  • [Updated] Introduction to Service Bus lab updated to latest AppFabric portal experience
  • [Updated] Eventing on the Service Bus lab updated to latest AppFabric portal experience
  • [Updated] Service Remoting lab updated to latest AppFabric portal experience
  • [Updated] Rafiki demo updated to latest AppFabric portal experience
  • [Updated] Service Bus demos updated to latest AppFabric portal

Release: April Update
Version: 2.9

Download

Saludos

Fernando is Microsoft’s Community Program Manager for the Latin America Region


Wely Lau described Combining Web and Worker Role by Utilizing Worker Role Concept in a 4/25/2011 post:

I am very much excited to write this post, as I believe not many people have realized this, and I can tell that it would be very helpful in many scenarios. Steve actually mentioned this at MIX 2010 in the session entitled 10 Things You Didn’t Know You Could Do with Windows Azure.

Always Start with an Introduction

Let us refresh our minds: Windows Azure service roles (Web Role and Worker Role) are actually provisioned VMs that run on Windows Azure.

  • The web role provides an out-of-the-box IIS 7 environment which allows us to host our application.
  • A worker role, by contrast, is an “almost” empty VM which enables us to do whatever we like, specifically in the while loop within the Run method:
public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("WorkerRole1 entry point called", "Information");

    while (true)
    {
        //put our code and logic here...

        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");
    }
} 
What We Can Do with the Worker Role

The worker role is indeed very flexible, as we can do many things with it. Some common patterns for utilizing worker roles are:

  • Performing background or batch processing
  • Hosting non-HTTP services (for example, WCF)
  • Running other web or application servers (e.g. Ruby, Java, Mongoose)

For compute-intensive tasks, it’s definitely fine to dedicate a worker role to performing the task. However, in many scenarios, we only need to perform a simple, non-compute-intensive task.

Cost effective

It would be somewhat wasteful to create two instances of a worker role just to run those tasks. (Remember that the compute SLA requires at least two instances to meet the 99.95% uptime guarantee.)

Considering the cost of a small-size VM, that could easily cost you about:

$0.12 × 24 hours × 30.5 days per month × 2 instances = USD 175.68 per month

So, how can we save that cost?

Combining Web and Worker Role

Recall from the introduction above that for both the web and worker role we generally have a WebRole.cs or WorkerRole.cs file within the project. The classes in these two files inherit from RoleEntryPoint, whose methods we can override.

public abstract class RoleEntryPoint
{
    protected RoleEntryPoint();

    public virtual bool OnStart();

    public virtual void OnStop();

    public virtual void Run();
}

These methods are meant to be overridden when the role is starting (OnStart), when the role is stopping (OnStop), and when the role has completed OnStart and is ready to run (Run). This is pretty interesting, since most people don’t realize that the entire role instance lifecycle is available in web roles just as it is in worker roles.

The idea is to override the Run method in the WebRole class. Run is generally provided in WorkerRole.cs but not in WebRole.cs; however, you can simply override it, and the class becomes something like this:

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            // Read the storage connection string from the service configuration
            // and create the table service context (TaskDataServiceContext is the
            // author's own helper class; a sketch follows below).
            var account = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
            var context = new TaskDataServiceContext(account.TableEndpoint.ToString(), account.Credentials);

            // Add a record to table storage, then sleep for 3 seconds.
            context.AddTask(new Random().Next(100).ToString());
            System.Threading.Thread.Sleep(3000);
        }
    }

    public override bool OnStart()
    {
        return base.OnStart();
    }
}

In the example above, I override the Run method to perform a simple task: adding a record to table storage every 3 seconds. Of course, you can add anything you want here, typically just like what you would have written in a worker role.
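
The post doesn’t show the TaskDataServiceContext class or its AddTask method, so here is a hedged sketch of what such a helper might look like with the SDK 1.x Microsoft.WindowsAzure.StorageClient table API; the TaskEntry entity shape and the “Tasks” table name are assumptions, not taken from the post:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical table entity: one row per generated task value.
public class TaskEntry : TableServiceEntity
{
    public TaskEntry() { }                  // required by the table service serializer

    public TaskEntry(string value)
    {
        PartitionKey = "tasks";
        RowKey = Guid.NewGuid().ToString();
        Value = value;
    }

    public string Value { get; set; }
}

// Hypothetical context matching the calls made in the Run method above.
public class TaskDataServiceContext : TableServiceContext
{
    private const string TableName = "Tasks";

    public TaskDataServiceContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
        // Make sure the table exists before inserting into it.
        new CloudTableClient(baseAddress, credentials).CreateTableIfNotExist(TableName);
    }

    public void AddTask(string value)
    {
        AddObject(TableName, new TaskEntry(value));
        SaveChangesWithRetries();
    }
}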

What actually happens in the cloud is that a separate executable process is created to keep running the Run method, while IIS is still listening for web traffic.

Is it an ideal solution?

So, is it an ideal solution? Well, the answer pretty much depends!

If you are sure that your batch-processing jobs (inside the Run method) won’t consume too much CPU or memory, AND you don’t mind sharing the web role’s resources with them, then that’s fine!

However, if you want a separate VM that performs the particular task without affecting the web role, then you are encouraged to go with a dedicated worker role.


The Windows Azure Team reported Just Announced: LinkShare Chooses Windows Azure for its Global Performance Marketing Application Solution on 4/25/2011:

After assessing cloud computing offerings from Amazon Web Services and Google App Engine, online performance marketing firm LinkShare announced today that it is using the Windows Azure platform for development of LinkShare Lightning.  LinkShare Lightning is the next generation of its online performance marketing application. The firm selected the Windows Azure platform to enable its developers to create a solution that would easily scale to handle peak holiday-season Internet traffic without requiring the costly investment in IT infrastructure.

Using Windows Azure, SQL Azure, SQL Server 2008 Enterprise and .NET Framework 4, LinkShare developers created a solution they predict will easily handle more than 1 billion impressions per day and deliver annual savings of up to several million dollars in hardware, software and development costs.  These savings will allow LinkShare to allocate more resources to research, innovation and customer feature requests.

To learn more about this announcement, click here to read the press release.  To learn more about LinkShare's experiences with the Windows Azure platform, you can read the case study here.  Click here to visit the LinkShare website.


Bruce Kyle reported a Microsoft US Offer: Verify Your App Is Compatible in Microsoft Platform Ready, Get Microsoft Office Pro in a 4/25/2011 post to the US ISV Evangelism blog on 4/25/2011:

Verify your application is compatible in Microsoft Platform Ready with Windows 7, Windows Server 2008 R2, SQL Server 2008, SQL Server Express, SQL Server 2008 R2 or Windows Azure between today and May 31, 2011 to receive a free copy of Microsoft Office Professional 2010.

In a few steps, you verify your application is compatible and receive marketing benefits to help your company grow.

IMPORTANT NOTE: You must use the same Live ID in both Microsoft Platform Ready and the Microsoft Partner Network. There is no test to pass, but your MPR account must be linked to your Microsoft Partner Network ID.

Eligible companies that register a compatible application on the http://www.microsoftplatformready.com site shall receive a copy of Microsoft Office Professional 2010 “Not for Resale” edition.

How to Verify Your Application is Compatible

1. If you haven’t done so, sign up for Microsoft Partner Network. It’s free. As a member of the Microsoft Partner Network, you’ll have access to benefits that can help you deliver and support creative solutions that add customer value and position you as a trusted advisor. All organizations have a home within the Microsoft Partner Network. And new membership opportunities help ensure that our partners can thrive.

2. If you haven’t done so, sign up for Microsoft Platform Ready.

3. Sign into Microsoft Platform Ready with the same LiveID you used for your MPN membership.

4. Register your application. You’ll be asked about the name of your application and the technologies that it works on.

5. You can self-verify that your software works with SQL Server and the other technologies. This is a self-verification to let us know that you are compatible. You do not have to perform the MPR Tests to verify your app.

Click the Test tab, then click Verify, then click the versions that your software is compatible with. Note: Self-verification will not count toward the ISV Competency but will qualify you for the marketing benefits in MPR.


Once you mark your application compatible, you’ll see it listed in MPR.


(In my example I opted in to share my results on Facebook, but such sharing is optional.)

Promotion Eligibility

To participate in this promotion company must meet the following eligibility criteria:

  • Company must be legally domiciled within the 50 United States (including the District of Columbia), and;
  • Company must be an active member of the MPN and MPR programs, and;
  • Before May 31, 2011, company must register or sign in to Microsoft Platform Ready and profile a compatible application for any of the following qualifying Microsoft products: Windows 7, Windows Server 2008 R2, SQL Server 2008, SQL Server Express, SQL Server 2008 R2 or Windows Azure.
    NOTE: Only the first new compatible application per qualifying technology per company will be accepted. By registering a compatible application company confirms that you have either self-verified that the application is compatible or successfully completed the online compatibility test. The Live ID used to register the compatible application shall be used to identify company’s representative.

View Promotion Terms & Conditions

Only the first new compatible application per qualifying technology per company will be accepted. This offer is limited to a maximum of four gifts per company, while supplies last; the promotion is limited to the first 2,500 registered compatible applications on MPR.

Be Sure Your MPR Account Is Linked to MPN

We need the information in the Microsoft Partner Network to send you your copy of Office Professional, so you’ll want to be sure the two accounts are linked.

Log into Microsoft Platform Ready using your LiveID. Click Edit Profile.


Check the bottom of the page to see if your accounts are linked.


If not, click the Microsoft Partner link and sign in.

Your copy of Office Professional awaits.

Take the Next Step

Once you have verified your app is compatible, you can take the next step and test your application. Testing your app is optional to the offer, but there are more benefits by passing the test and sending the results to MPR. You’ll be taking one of the steps to earn the ISV Competency, which will help your company receive licenses for Microsoft software. Download the tool to get started.

About Microsoft Platform Ready

Microsoft Platform Ready (MPR) is designed to give you what you need to plan, build, test and take your solution to market. We’ve brought a range of platform development tools together in one place, including technical, support and marketing resources, as well as exclusive offers.

Whether you're building simple add-ins, rich client applications, complex client and server solutions or planning your future with cloud services, you can access training, support, testing and marketing resources to help you take your solution to market faster. 

I put my OakLeaf Systems Azure Table Services Sample Project - Paging and Batch Updates Demo project through the MPR process a couple of months ago, so it doesn’t qualify for the gift offer. I understand that an alternative to an Office Professional 2010 license is a $250 check from Microsoft.


<Return to section navigation list> 

Visual Studio LightSwitch

• SearchCloudComputing.com (@TTintheCloud) published My (@rogerjenn) Microsoft brings rapid application development to the cloud article on 4/26/2011:

As the cloud computing wars heat up, vendors need to provide distinct features for their customers. Visual Studio LightSwitch represents an attempt by Microsoft to deliver Access-like rapid application development capabilities to .NET Windows and Web database front-ends.

LightSwitch is an application framework and development environment for quickly building forms-over-data desktop or Web apps with minimal or no Visual Basic or C# code. It targets enterprise business analysts and power users who need networked line-of-business (LoB) apps with the data entry and analytic features of Microsoft Access and FileMaker databases or Excel worksheets but aren’t programmers.

LightSwitch applications automatically generate a three-tier architecture with a Silverlight 4.0 presentation tier that can run on users’ desktops or implement the Model-View-ViewModel (MVVM) pattern for their Web browsers.


Figure 1. Visual Studio LightSwitch’s three-tier architecture

The middle logic tier, which includes Entity Framework (EF) v4, can be hosted on end-users’ machines, on an Internet Information Services (IIS) server or, as of Beta 2, in a Windows Azure Web role. The data tier can be SQL Server, SQL Server Express, SQL Azure in the cloud, other relational databases that have an EF v4 provider, SharePoint 2010 lists, custom Windows Communication Foundation (WCF) RIA Domain Services or even flat CSV files. If you don’t have a data source, LightSwitch uses EF v4’s model-first capability to generate a SQL Server 2008 R2 Express database.

The article continues with details of LightSwitch Beta 2’s new features and a comparison of the development time required to create an Internet-accessible Access Web database. For the details of how I published an Access 2010 database project to a Web Database running on SharePoint 2010 Enterprise Edition’s Access Services hosted by AccessHosting.com, check out my 00:45:00 Webcast at Upsizing Access 2010 Projects to Web Databases with SharePoint 2010 Server. (Free site registration required.)

Click here to read the rest of the SearchCloudComputing.com article.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

Rich Miller reported Microsoft Reveals its Specialty Servers, Racks in a 4/25/2011 post to the Data Center Knowledge blog:

  • The design concept for the Microsoft servers that power its global cloud computing platform.

    As it seeks to slash power usage across its global cloud computing platform, Microsoft has been refining its designs for energy-efficient data center hardware. The company is now sharing the details of those designs, which feature custom servers, a high-efficiency power distribution system and in-rack UPS units.

    “When Microsoft saw its server counts and data center footprint growing, it became clear that we needed to improve efficiency,” said Dileep Bhandarkar, a Distinguished Engineer at Microsoft, who oversees the company’s server hardware architecture. Bhandarkar began working with server vendors (OEMs) to develop specialized designs based on specs developed by the Microsoft server team. He recently discussed these designs with Data Center Knowledge.

    Over the past three years, Microsoft has openly shared information at industry conferences about its data center modules, known as IT-PACs. But it has offered fewer details about the racks and servers inside those containers, limiting disclosures to events for small groups of industry researchers.

    Half-Width Server Design

    Microsoft’s servers for its cloud-scale services – including Bing, Hotmail and Windows Azure – are based on a  half-width design that fits 2 servers into a 1U slot in a rack. Each server board houses two CPU sockets, with room for 4 DIMM memory slots in each socket.

    This design allows Microsoft to fit as many as 96 servers into an extra-tall 57U rack, pushing power densities as high as 16 kilowatts per rack. Each rack includes at least two sets of battery packs providing short-term backup power, allowing Microsoft to operate without a central UPS system.

    Microsoft also saves energy in its power distribution system, which brings 480V, three-phase AC power directly to the rack, where power supplies convert it to 12V DC power output for the servers. This eliminates power losses from the multiple AC-to-DC conversions common in centralized UPS systems. The power supplies and rack-level UPS units each support 24 servers, and occupy 3U of rack space.

    In the IT-PAC, the servers have no fans, using air pressure within the module to manage airflow through the servers. That’s an extension of the data center team’s focus on eliminating unnecessary components on each server. “We’ve been doing that for the last three years,” he said. “When I came to Microsoft, every server had a DVD drive.”

    Optimizing Server Energy Usage

    The Microsoft team optimized its servers to use 40 to 60 watts of power (depending upon whether it uses 4 cores or 6 cores). The design emphasizes high-efficiency components that can operate in expanded ranges of temperature (up to 90 degrees F) and humidity.

    Microsoft developed two sets of specifications for its server vendors:

    • One spec optimized for homogenous deployments of “scale-out” services like Bing, Hotmail and Windows Azure. The spec is designed for large bulk purchases of servers delivered in pre-populated racks, with new RFPs for each design refinement.
    • A second spec based on generic standards for an internal catalog offering a dozen different configurations for smaller business groups within Microsoft. The company typically works with a single vendor for 12 to 18 months at a time to maintain continuity.

    Bhandarkar says Microsoft has worked with vendors to reduce power use when processors are idle.  “It used to be an idle server would be 50 percent  of the power (used when the server is active),” he said. “We’ve pushed that down to about 30 percent.”

    Another goal is to reduce the power draw from DIMM memory modules. “The DRAM industry is getting the green bug,” said Bhandarkar. “The market is shifting from 1.5V to 1.35V, and heading to 1.2V.”

    ‘Lots of Opportunities’ for Innovation at the Rack

    “We’re working with a lot of the processor vendors, and helping them understand what our server loads look like,” said Bhandarkar. “There are lots of opportunities to optimize rack-level power and cooling.”

    Shifting the UPS and battery backup functions from the data center into the server cabinet reduces power losses from multiple conversions that occur between the utility power grid and the data center equipment in a standard UPS architecture. “We did enough analysis to convince people this (rack-level UPS) could be done,” said Bhandarkar.

    On the power distribution front, Microsoft examined a number of options, but focused on the power supply. “The entire data center industry revolves around 480 volts and the server industry likes 208 or 415,” said Bhandarkar. “This doesn’t make any sense. The infrastructure for 480V is too big a change, so we’re driving the server industry to change power supplies.”

    Part of Broader Industry Conversation

    The use of rack-level UPS and streamlined power distribution are similar in concept (if not execution) to cloud-scale system refinements that have been publicly discussed by Google and the Facebook Open Compute project. Bhandarkar says the disclosures are helpful in advancing innovation and best practices in the data center.

    “This is not rocket science,” said Bhandarkar. “Smart people facing the same problems will come up with similar solutions. Driving the entire industry forward helps us in the long run.”

    While Microsoft has optimized its design, Bhandarkar avoids the word “custom” in describing its process. “I call us a leading-edge adopter,” he said. “Our stuff is not custom. We don’t own the IP (intellectual property). We encourage our vendors to sell it to others so they can recoup their investments. We would like them to sell it to the industry at large.”


  • Lori MacVittie (@lmacvittie) asserted “IT as a Service requires commoditization. Commoditization implies standardization. The network needs standardization, and that’s only going to happen via a common API and semantic model” as a preface to her API Jabberwocky: You Say Tomay-to and I Say Potah-to post of 4/25/2011 to F5’s DevCentral blog:

    Randy Bias of Cloudscaling apparently set off a firestorm at Cloud Connect 2011, stating with typical Randy forthrightness: “API's don't matter.”

    It’s not something we haven’t heard before. In fact, it’s not something I haven’t said myself, in a way. Randy wasn’t really questioning the need for APIs, that’s a given. What he was getting at was to question the need for standardization of APIs.

    Within IT, particularly in development organizations, the API has become the primary method of integration. Applications needing to invoke the functionality of another application or system leverage API calls to perform some task: add this data, retrieve this data, process this data. When SOA was at its peak, the focus was properly on the abstraction of business functions at the API level, to provide for reuse and consistency of process across applications.

    The focus of “cloud” APIs and infrastructure APIs has been on interoperability. While a standardized API is indeed one way to achieve interoperability, i.e. the ability to migrate operational processes across dissimilar environments, there are reasons why such a goal may be considered impossible to achieve. Consider Mike Fratto’s recent commentary on APIs and cloud computing, noting that “feature variation between vendors” is too broad to expect agreement on which functions make up a common base from which such an interoperable infrastructure API could be designed.


    Of course APIs matter, but having a functional standardized cloud service management API doesn't and, I'd argue, never will.

    The semantics of calling a method on vendor A's API or Vendor B's API will be different because of how the API's are implemented, but there is likely enough feature variation between vendors that even agreeing on what functions make up the most common denominator is probably impossible to settle. 

    -- Mike Fratto, “Standardizing Cloud APIs is Useless”, Network Computing (March 2011)

    Mike is right; the variance across infrastructure solutions in terms of features and functions is broad and makes a common API representing those features and functions a daunting if not impossible task. That said, I would argue that achieving API parity – seen as necessary for interoperability – is not necessarily the only goal of infrastructure APIs today. Let us consider that the goal of infrastructure and cloud APIs is to present a common interface for the implementation of operational functions.

    SERVICE-ORIENTED OPERATIONS
    The management of infrastructure components today – in most cases – can be accomplished via a standards-based (i.e. protocol-interoperable) service-enabled SDK. These SDKs provide standards-based access to just about every method necessary to manage the infrastructure.

    But these are not “APIs” in the way in which we need them to be APIs; they are not service-oriented themselves, but merely leverage service-oriented protocols as a means to exchange the data necessary to be managed.

    What is needed at the infrastructure level is service-oriented operations; the abstraction of operational functions into an API that can be implemented commonly across clouds, environments and infrastructure. Commonality across such operational tasks does exist. Despite the very broad differences in application network infrastructure (application delivery controllers, load balancers, WAN optimization controllers) there are still common operational tasks that can certainly be abstracted and specified by a common API. The differences are in the features and specific configuration, not necessarily the tasks. Configuring a Virtual Server (or Virtual IP Address, VIP) is an operational task that requires the same core functions across multiple vendor implementations. What we have under the hood is semantic differences that exacerbate existing feature disparity: you say tomato, I say potah-to.

    Therein lies the opportunity for standardization – at the operational task layer of the IT stack.
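
    As a purely illustrative sketch of that opportunity (every type and member name here is hypothetical; no vendor or standards body defines this interface), a task-level abstraction for the virtual-server example might look something like this, with each vendor shipping its own driver behind the common interface:

    // Hypothetical, illustrative only: one task-level "create virtual server" operation,
    // with vendor-specific implementations hidden behind the interface.
    public class VirtualServerDefinition
    {
        public string Name { get; set; }
        public string VirtualAddress { get; set; }   // the VIP
        public int Port { get; set; }
        public string[] PoolMembers { get; set; }    // back-end servers
    }

    public interface IApplicationDeliveryController
    {
        // Common operational tasks; feature-specific tuning stays vendor-specific.
        void CreateVirtualServer(VirtualServerDefinition definition);
        void RemoveVirtualServer(string name);
    }

    // Devops automation codes against the task, not against vendor A's or vendor B's SDK.
    public static class ProvisioningTasks
    {
        public static void AddWebTier(IApplicationDeliveryController adc, string[] webServers)
        {
            adc.CreateVirtualServer(new VirtualServerDefinition
            {
                Name = "web-vip",
                VirtualAddress = "10.0.0.100",
                Port = 80,
                PoolMembers = webServers
            });
        }
    }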

    DOES it MATTER? 
    The question was does a standardized API matter or not? Assuming we can get to operational function parity across the infrastructure, does it change anything? The differences in features that could be invoked via an API make true seamless interoperability a likely unattainable goal.

    Mike goes on to say that if we cannot achieve such a goal, standards are unnecessary: “The goal of IT standards is for products and technologies to foster interoperation. If standards can't result in interoperation, then the standard is useless”. Mike’s view of the application of standards and their benefits is limited to interoperability. I disagree that interoperability is the only benefit provided by standards and that this is part of the transformation of IT – and networking in particular – that must happen ere IT as a Service can come to fruition. The ultimate goal is to provide a portfolio of infrastructure services with the implementation being not irrelevant, but loosely coupled; separated from its interface in such a way that components – or even providers - can be switched out without disruption.

    Mike’s argument is based upon the premise that there is only one purpose for IT standards: interoperability. Interoperability isn’t the only raison d’être for APIs and standardization. That may have been true in the past, but that is no longer true. Standardization also, as we are so often reminded by cloud pundits, enables commoditization, which is a key step toward reducing the cost of resources in the data center. APIs and standardization also matter to devops, to the operational developer who needs to automate and integrate infrastructure by codifying operational processes. They need APIs as a means to scale operations along with servers and applications and users. Standardized APIs (or at least a standardized data center infrastructure model) would significantly reduce the time required for integration simply by reducing the number of models and interfaces devops needs to learn. It also subsequently reduces the possibility of introducing repeatable errors into such operations by allowing devops to focus on the process, not specific products or their APIs. While SDKs are available, they often require intimate knowledge of the solution as well as the standards and protocols by which such SDKs are used. The difference in semantics alone is enough to put off even the most stalwart operations developer. APIs mitigate these differences, offering ease-of-use, as it were, and a simpler model of interaction that is necessary for devops to not only automate, but automate efficiently. The difference between 5-8 SDK function calls, over the network, and a single API call are huge in terms of performance, reliability and the need for idempotency in infrastructure operations. It’s not just an end-user thing, it’s an operational risk mitigation thing as well.

    I would argue at this point that yes, APIs and a common operational model (i.e. standardized) are necessary to the future growth of cloud computing and highly dynamic data center architectures. The commoditization of operational processes necessary to achieve economy of scale requires standardization. Without such common ground the ability to automate operational processes is negatively impacted and reduces the chance and benefits afforded by moving to such a model in the first place.

    SOA ultimately failed to achieve on its goals not because it was a poor idea but because implementations failed to recognize that the service interface was about common business functions, not application functions. Cloud and infrastructure APIs and integration need to focus on a standardized (or at least de facto standardized) service interface that encapsulates common operational functions. APIs – and standardization of the models -do matter at the infrastructure layer of a data center architecture in the sense that they provide the foundation upon which operations can be automated and ultimately integrated in to the data center orchestration engines that will emerge in future iterations to replace many of the home-grown scripts and solutions leveraged today out of necessity.

    APIs do matter, but not solely for purposes of interoperability. Rather the goal is to enable the scalability of operations through automation. Standardization of common operational components (functions and the core model) would go along way toward enabling that scalability to occur more efficiently.


    Kevin Remde (@kevinremde) posted Manage Your Windows Azure Cloud (“Cloudy April” - Part 25) on 4/25/2011:

    image Let me ask you something… Are you like many IT Pros I talk to Windows Azure about, who think, “Oh.. that’s cool.  But it’s for developers.  How am I going to manage it?”

    “Yeah.. that’s what I’m thinking!  It’s like you can read my mind!”

    Exactly.  And I’ve heard it a lot from the IT Pros I’ve talked to, and quite honestly I thought it myself when Windows Azure was first introduced.  And also, for a while there I was frustrated that Microsoft didn’t have a better answer when it came to automating or otherwise controlling and monitoring your Windows Azure workloads; though I knew that more and better solutions than just watching some stream of logging information were “in the works”.  Fortunately, now we’ve got some good solutions for you; and even more on the way.  So I thought I’d take a minute to list some of the tools and options that are available, and some that are still-to-come, regarding the management of Windows Azure and SQL Azure.

    The first thing you’ll want to do is walk through some of the free training guides.

    “But Kevin.. that’s for developers.”

    No.. not entirely.  Yes, sure you will want to install the platform and the training kit samples, but you won’t have to do any coding.  The training kit comes with the fully-completed example applications that you can quickly compile and package up for putting up into your trial or Windows Azure Pass (Promo code: TNAZURE) account.  And once you have that, the training walks you through the important steps of configuring storage, loading your application using the Windows Azure Management Portal, and working with the web-based management.  Once you’ve got that down, further exercises show you how to use Windows PowerShell to securely manage and control your Windows Azure applications.

    Manage this!

    Also on the subject of PowerShell for Windows Azure, you really should watch Max Adams’ “How Do I” video on TechNet: http://technet.microsoft.com/en-us/ee957677.aspx

    Second, you might take a look at the MMC.

    “Really?  There’s a snap-in for the MMC?”

    Yes – The Windows Azure Management Tool.  It’s a non-MS-Supported tool, but it does a lot for you, such as managing your hosted services, monitoring diagnostics on performance and events, managing certificates, configuring storage, etc.  It is even extensible, and drives PowerShell to do its work. 

    Ryan Dunn has also put together a nice 15-minute introductory video on the tool.

    And finally, we have a release candidate of a Windows Azure Application Monitoring Management Pack that you can use with System Center Operations Manager.  Here is the description from the download page:

    Overview
    The Windows Azure Monitoring Management Pack enables you to monitor the availability and performance of applications that are running on Windows Azure.

    Feature Summary
    After configuration, the Windows Azure Monitoring Management Pack offers the following functionality:
    • Discovers Windows Azure applications.
    • Provides status of each role instance.
    • Collects and monitors performance information.
    • Collects and monitors Windows events.
    • Collects and monitors the .NET Framework trace messages from each role instance.
    • Grooms performance, event, and the .NET Framework trace data from Windows Azure storage account.
    • Changes the number of role instances via a task.

    To summarize: Here are the tools mentioned above, plus a few extras, that will help you get started in learning how to manage and monitor Windows Azure and Windows Azure applications:

    ---

    What are you using or hoping to use to manage your Windows Azure platform and your applications or storage?  Are you using any other methods you’d like to share with us?  We’d love to hear from you in the comments.

    In part 26 of the series I’m going to introduce to you and discuss a Windows Azure-based IaaS that is not really IaaS.  (Huh?)

    Let me ask you something… Are you like many IT Pros I talk to Windows Azure about, who think, “Oh.. that’s cool.  But it’s for developers.  How am I going to manage it?”

    “Yeah.. that’s what I’m thinking!  It’s like you can read my mind!”

    Exactly.  And I’ve heard it a lot from the IT Pros I’ve talked to, and quite honestly I thought it myself when Windows Azure was first introduced.  And also, for a while there I was frustrated that Microsoft didn’t have a better answer when it came to automating or otherwise controlling and monitoring your Windows Azure workloads; though I knew that more and better solutions than just watching some stream of logging information were “in the works”.  Fortunately, now we’ve got some good solutions for you; and even more on the way.  So I thought I’d take a minute to list some of the tools and options that are available, and some that are still-to-come, regarding the management of Windows Azure and SQL Azure.

    The first thing you’ll want to do is walk through some of the free training guides.

    “But Kevin.. that’s for developers.”

    No.. not entirely.  Yes, sure you will want to install the platform and the training kit samples, but you won’t have to do any coding.  The training kit comes with fully completed example applications that you can quickly compile and package up for deployment to your trial or Windows Azure Pass (Promo code: TNAZURE) account.  Once you have that, the training walks you through the important steps of configuring storage, loading your application using the Windows Azure Management Portal, and working with the web-based management.  Once you’ve got that down, further exercises show you how to use Windows PowerShell to securely manage and control your Windows Azure applications.

    Manage this!

    Also on the subject of PowerShell for Windows Azure, you really should watch Max Adams’ “How Do I” video on TechNet: http://technet.microsoft.com/en-us/ee957677.aspx

    Second, you might take a look at the MMC.

    “Really?  There’s a snap-in for the MMC?”

    Yes – the Windows Azure Management Tool.  It’s not a Microsoft-supported tool, but it does a lot for you, such as managing your hosted services, monitoring diagnostics on performance and events, managing certificates, configuring storage, etc.  It is even extensible, and drives PowerShell to do its work.

    Ryan Dunn has also put together a nice 15-minute introductory video on the tool.
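
    Under the covers, the PowerShell cmdlets, the MMC snap-in and the Management Portal all end up driving the Windows Azure Service Management REST API, which authenticates with a management certificate you upload to your subscription. As a point of reference, here is a minimal sketch in Python of listing your hosted services directly against that API; the subscription ID, certificate path and x-ms-version value are placeholder assumptions, not values from the articles above.

        import http.client
        import ssl

        SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder subscription GUID
        CERT_PEM = "management-cert.pem"  # placeholder: management certificate + private key in PEM form

        # The client certificate is the only credential the Service Management API needs.
        context = ssl.create_default_context()
        context.load_cert_chain(CERT_PEM)

        conn = http.client.HTTPSConnection("management.core.windows.net", context=context)
        conn.request(
            "GET",
            "/%s/services/hostedservices" % SUBSCRIPTION_ID,
            headers={"x-ms-version": "2010-10-28"},  # an API version of roughly this vintage; treat it as an assumption
        )
        response = conn.getresponse()
        print(response.status, response.reason)
        print(response.read().decode("utf-8"))  # XML listing of your hosted services
        conn.close()

    Anything the graphical tools can do ultimately maps to a call like this one, which is why they lend themselves so well to scripting.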

    And finally, we have a release candidate of a Windows Azure Application Monitoring Management Pack that you can use with System Center Operations Manager.  Here is the description from the download page:

    Overview
    The Windows Azure Monitoring Management Pack enables you to monitor the availability and performance of applications that are running on Windows Azure.
    Feature Summary
    After configuration, the Windows Azure Monitoring Management Pack offers the following functionality:
    • Discovers Windows Azure applications.
    • Provides status of each role instance.
    • Collects and monitors performance information.
    • Collects and monitors Windows events.
    • Collects and monitors the .NET Framework trace messages from each role instance.
    • Grooms performance, event, and the .NET Framework trace data from Windows Azure storage account.
    • Changes the number of role instances via a task.
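
    The last item in that list, changing the number of role instances, ultimately comes down to rewriting the Instances count in the deployment’s service configuration (.cscfg) and pushing the updated configuration back through the portal or the management API. Here is a minimal sketch of that edit in Python; the file name and role name are placeholder assumptions.

        import xml.etree.ElementTree as ET

        # Namespace used by ServiceConfiguration.cscfg files.
        NS = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
        ET.register_namespace("", NS)

        def set_instance_count(cscfg_path, role_name, count):
            """Rewrite the Instances count for one role in a .cscfg file."""
            tree = ET.parse(cscfg_path)
            for role in tree.getroot().findall("{%s}Role" % NS):
                if role.get("name") == role_name:
                    role.find("{%s}Instances" % NS).set("count", str(count))
            tree.write(cscfg_path, encoding="utf-8", xml_declaration=True)

        # Placeholder file and role names; the edited file still has to be applied
        # to the running deployment via the portal, the API, or one of the tools above.
        set_instance_count("ServiceConfiguration.cscfg", "WebRole1", 4)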

    To summarize: Here are the tools mentioned above, plus a few extras, that will help you get started in learning how to manage and monitor Windows Azure and Windows Azure applications:

    ---

    What are you using or hoping to use to manage your Windows Azure platform and your applications or storage?  Are you using any other methods you’d like to share with us?  We’d love to hear from you in the comments.

    In part 26 of the series I’m going to introduce and discuss a Windows Azure-based IaaS that is not really IaaS.  (Huh?)


    <Return to section navigation list> 

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

    Hermann Daeubler presented TRND26 – SAP Deployments on Windows Azure (10 MB PDF) to the SAP Virtualization and Cloud Week held in Palo Alto on April 12 – 14, 2011:

    image

    Future VM Role management via “Concero”

    ANNOUNCING System Center Project Codename “Concero” (Nexus SC: The System Center Team Blog) …

    (c) Deploy and manage Services and virtual machines on private clouds created within VMM 2012 and on Windows Azure…

    (e) Copy Windows Azure configuration, package files and VHDs from on-premises and between Windows Azure subscriptions…

    image

    SAP Note 1380654: SAP Support in Cloud Environments

    Public Cloud

    The support provided by SAP for productive system environments ensures that the support provided by the manufacturers of the IT stacks of hardware, virtualization technology, operating system and database is coordinated. This includes (among other things) the existence of support contracts and service level agreements among the manufacturers. This type of coordination does not currently exist in “public cloud” environments.

    Therefore, the operating of SAP systems in a “public cloud” has not been released by SAP.

    Goals and Future Steps of Windows Azure PoC

    • Provide guidance / best practice for using Windows Azure VM Role for NetWeaver / SAP Business Suite
    • Come to an official support statement from SAP and Microsoft regarding non-critical systems on VM Role
    • Evaluate support of mission-critical SAP systems on VM Role
    • Evaluate options to run NetWeaver on SQL Azure
    • Evaluate options for service / hosting partners

    B1 Prototype – Hybrid Mode Scenario

    Web CRM
    • Add/Read/Update Business Partner (Simplified)
    • Add/Remove Business Opportunity (Simplified)
    • Sales Opportunity Pipeline Report with filtering

    Why this scenario?
    • Web CRM – a missing functionality
    • Already widely available SaaS application
    • Data intensive and business logic dependent

    image

    image

    Appendix:

    1. Links related to Windows Azure
    2. Uploading a VM to Azure
    3. Deploy VM out of Visual Studio
    4. Using virtual hostnames
    5. Differencing disks
    6. Certificates for Windows Azure access
    7. Including a VM on Windows Azure into a local domain
    8. Database backup/restore on Windows Azure


    <Return to section navigation list> 

    Cloud Security and Governance

    image

    No significant articles today.


    <Return to section navigation list> 

    Cloud Computing Events

    Jeff Price announced Mark your calendars. David Chappell, renowned speaker on Azure/Cloud Computing, is coming to a new Meetup for San Francisco Bay Area Azure Developers Group on 5/23/2011 6:30 PM at Microsoft San Francisco:

    Announcing a new Meetup for San Francisco Bay Area Azure Developers Group!

    What: Mark your calendars. David Chappell, renowned speaker on Azure/Cloud Computing.
    When: Monday, May 23, 2011 6:30 PM

    Where: Microsoft San Francisco (in Westfield Mall where Powell meets Market Street)
    835 Market Street Golden Gate Rooms - 7th Floor
    San Francisco, CA 94103

    Why: When CIOs and CEOs meet, they often discuss "the cloud". We are tasked with recommending the right cloud platform (IaaS, PaaS or SaaS), public vs. private cloud, and this presentation will help us be prepared when asked to justify our decisions.


    David Chappell will be delivering an impactful presentation, Cloud Platforms Comparison.  David is a knowledgeable and entertaining speaker often hired by Microsoft to assist with cloud strategy and to help deliver the Microsoft “We’re all in!” message.

    David Chappell is Principal of Chappell & Associates in San Francisco, California. Through his speaking, writing, and consulting, he helps people around the world understand, use, and make better decisions about new technology.

    David has been the keynote speaker for more than a hundred conferences and events on five continents, and his seminars have been attended by tens of thousands of IT leaders, architects, and developers in forty countries. His books have been published in a dozen languages and used regularly in courses at MIT, ETH Zurich, and many other universities. David has also been a Series Editor for Addison-Wesley and a columnist for several publications. In his consulting practice, he has helped clients such as Hewlett-Packard, IBM, Microsoft, Stanford University, and Target Corporation adopt new technologies, market new products, and educate their customers and staff.

    David's comments have appeared in The New York Times, CNN.com, and many other publications. Earlier in his career, he wrote networking software, chaired a U.S. national standardization working group, and played keyboards with the Peabody-award-winning Children's Radio Theater. David holds a B.S. in Economics and an M.S. in Computer Science, both from the University of Wisconsin-Madison.


    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    For the sake of even-handedness, here’s Mary Jo Foley’s (@maryjofoley) Whether it's Amazon or Microsoft, there's (still) no foolproof cloud post to ZDNet’s All About Microsoft blog of 4/21/2011:

    Talk about strange timing: Yesterday, I heard from a business user of Microsoft’s Windows Azure cloud platform who said that his company had been taken down by an Azure storage outage that lasted for six hours on April 15.

    A day later, the Web is abuzz with news about an Amazon EC2 outage (going on 10 hours as I type this post) that seems to be centered around the company’s cloud storage components.

    Like Amazon does with AWS, Microsoft maintains visible dashboard pages showing the real-time status of all of its Azure-related components. From the Azure Storage page, it looks like there’ve been Azure storage problems resulting in “service degradations” on not just April 15 (in the North Central and South Central regions), but also on April 19 (in East Asia and Western Europe).



    I’ve asked Microsoft for more details about what specifically happened on April 15 that caused the reported downtime and am awaiting word back.

    Update (4/22): Microsoft isn’t saying much about the outage, other than to acknowledge it happened. The official response, delivered through a company spokesperson:

    “At 6:40 AM PDT on April 15th, Microsoft became aware of an issue that affected some customers using the Windows Azure Storage service in the North Central and South Central US regions. This issue has been resolved.  We regret any inconvenience the outage may have caused our impacted customers. As always, we will investigate the cause of this issue and take steps to better ensure it doesn’t happen again.”

    The user who contacted me — who asked not to be named — said he believed there was a misconfiguration during storage deployment that hit both North Central and South Central U.S. at the same time that affected the way the load balancers were sending traffic. The user wanted to know more details about exactly what happened and what Microsoft is doing to head off similar types of problems in the future.

    I’m not posting this to downplay what’s going on with Amazon’s EC2. Nor am I doing so because I’ve heard Microsoft or Microsoft partners trying to use Amazon’s EC2 outage as a way to paint Azure as superior. (In fact, one member of the Azure team tweeted today that he hoped no one at Microsoft would do such a thing.)


    Outages and glitches happen across the cloud, not just on the infrastructure side, but on the cloud apps side, too. They’re a good reminder about the importance of backup/redundancy and the need to distribute one’s cloud storage across multiple geographic locations, if and when possible, as one of my ZDNet UK colleagues tweeted today.
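
    Her closing point about distributing cloud storage across multiple geographic locations is straightforward to prototype. Here is a minimal sketch with boto, using Amazon S3 as the example, that mirrors a primary bucket into a second bucket created in another region; the bucket names are placeholders and credentials are assumed to come from the environment.

        import boto
        from boto.s3.connection import Location

        conn = boto.connect_s3()  # reads AWS credentials from the environment

        # Placeholder bucket names: an existing primary bucket and a new backup
        # bucket created in a different region.
        primary = conn.get_bucket("my-app-data-primary")
        backup = conn.create_bucket("my-app-data-backup", location=Location.USWest)

        for key in primary.list():
            # Server-side copy: the object never passes through the client machine.
            backup.copy_key(key.name, primary.name, key.name)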


    • Carl Brooks (@eekygeeky) wrote Using the IBM Smart Business Cloud — Enterprise for SearchCloudComputing.com on 4/22/2011 (missed when posted):

    So I was very excited when IBM officially launched its general purpose public cloud service. It was a validation of the cloud model for the enterprise; it was a chance to see what one of the premier technology vendors on the planet would deliver when it put shoulder to wheel on this exciting new way to deliver IT.

    It’s got a growing user base too: check out the “Profile” screenshot at the bottom; not only do you get to see your IBM Cloud account, you get to see all your IBM friends, too. As of this writing, IBM’s cloud has 729 users running 2436 instances, 578 block stores and 666 images.

    Turns out it’s pretty much feeling its way along, just as Amazon Web Services (AWS) was 3-4 years ago.  It’s…um…not polished, but it works. It’s a true public cloud experience, even if the pricing is logarithmic in scale rather than incremental (goes from “quite reasonable” to “Oh My God” fairly quickly). You click and provision storage, instances, and so on. But it feels a little raw if you’re used to RightScale, the AWS Management Console and so on. It’s very bare bones at the moment.

    It’s also abundantly clear that the IBM Smart Business Cloud – Enterprise (SBC-Enterprise) is exactly the same as the IBM Smart Business Development and Test Cloud. The transition to “enterprise-class public cloud” is simply hanging a new shingle on the door. See the screenshots below; they haven’t really finished transitioning the brand on the portal pages, and it’s all over the documentation too. The test and dev cloud and the SBC-Enterprise cloud are one and the same.

    But that’s fine by me: if IBM wants to take their dev cloud infrastructure and call it Enterprise, they can do that. I’m not aware of any conceptual reasons for not doing production in a cloud right next to where you do test and dev, besides the expectations for uptime and support.

    Click here to read more.

    Stay tuned for my illustrated guided tour of signing up for IBM’s free Spring 2011 Promotion for their Enterprise Cloud IaaS offering. When I signed up on 4/26/2011, there were 749 users, 2411 instances, 596 storage units and 658 stored images. That’s only 20 new users, with fewer instances, over four days (but that included the Easter weekend).

    Here are the details of my Windows Server 2008 R1 Silver instance with 1 storage unit:

    image

    Windows Azure developers, many of whom complain about the time to provision a new server instance, will be interested to see the 32 minutes and 29 seconds required by the IBM Cloud to activate a Windows Server 2008 R1 instance.

    The Control Panel says my free instance expires in 730 days, but I believe the Spring 2011 Promotion ends in June 2011.


    Todd Hoff delivered The Big List of Articles on the Amazon Outage via 4/25/2011 post to the High Scalability blog:

    So many great articles have been written on the Amazon Outage. Some aim at being helpful, some chastise developers for being so stupid, some chastise Amazon for being so incompetent, some talk about the pain they and their companies have experienced, and some even predict the downfall of the cloud. Still others say we have seen a sea change in the future of the cloud, a prediction that's hard to disagree with, though the shape of the change remains...cloudy.

    I'll try to keep this list updated as more information comes out. There will be a lot for developers to consider going forward. If there's a resource you think should be added, just let me know.

    Experiences from Specific Companies, Both Good and Bad
    Amazon Web Services Discussion Forum

    A fascinating peek into the experiences of people who were dealing with the outage while they were experiencing it. Great real-time social archeology in action.

    There were also many, many instances of support and help in the log.

    Lessons Learned and Other Insight Articles


      The Amazon Web Service Status Dashboard reported the following details for Instance connectivity, latency and error rates of its Amazon Elastic Compute Cloud (N. Virginia) service:

      Posts from previous days are available below under Status History.

      Apr 24, 1:38 AM PDT We are continuing to recover remaining stuck EBS volumes in the affected Availability Zone, and the pace of volume recovery is now steadily increasing. We will continue to keep you posted with regular updates.

      Apr 24, 3:12 AM PDT The pace of recovery has begun to level out for the remaining group of stuck EBS volumes that require a more time-consuming recovery process. We continue to make progress and will provide additional updates on status as we work through the remaining volumes.

      Apr 24, 5:05 AM PDT As detailed in previous updates, the vast majority of affected EBS volumes have been restored by this point, and we are working through a more time-consuming recovery process for remaining volumes. We have made steady progress on this front over the past few hours. If your volume is among those recently recovered, it should be accessible and usable without additional action.

      Apr 24 7:22 AM PDT No significant updates to report at this time. We continue to make steady progress on recovering remaining affected EBS volumes and making them accessible to customers.

      Apr 24, 9:59 AM PDT We continue to make steady progress on recovering remaining affected EBS volumes and making them accessible to customers. If your volume is not currently responsive, we recommend trying to detach and reattach it. In many cases that may restore your access.

      Apr 24, 11:36 AM PDT The number of volumes yet to be restored continues to dwindle. If your volume is not currently responsive and your instance was booted from EBS, you may need to stop and restart your instance in order to restore connectivity.

      Apr 24, 2:06 PM PDT We continue to make steady progress on recovering the remaining affected EBS volumes. We are now working on reaching out directly to the small set of customers with one of the remaining volumes yet to be restored.

      Apr 24, 7:35 PM PDT As we posted last night, EBS is now operating normally for all APIs and recovered EBS volumes. The vast majority of affected volumes have now been recovered. We're in the process of contacting a limited number of customers who have EBS volumes that have not yet recovered and will continue to work hard on restoring these remaining volumes.
      If you believe you are still having issues related to this event and we have not contacted you tonight, please contact us here. In the "Service" field, please select Amazon Elastic Compute Cloud. In the description field, please list the instance and volume IDs and describe the issue you're experiencing.
      We are digging deeply into the root causes of this event and will post a detailed post mortem.

      Apr 25, 1:09 PM PDT We have completed our remaining recovery efforts and though we've recovered nearly all of the stuck volumes, we've determined that a small number of volumes (0.07% of the volumes in our US-East Region) will not be fully recoverable. We're in the process of contacting these customers.
      If you are still having trouble with your volume, please contact us here.

      Here’s a capture of the Status History section:

      image 
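
      For readers who want to script their own recovery, the two remediation steps Amazon suggests above (detach and reattach a stuck volume, and stop and restart an EBS-booted instance) map to a handful of EC2 API calls. Here is a minimal sketch with boto; the region, volume ID, instance ID and device name are placeholders.

          import time
          import boto.ec2

          # Placeholders: substitute your own region, volume, instance and device name.
          REGION, VOLUME_ID, INSTANCE_ID, DEVICE = "us-east-1", "vol-12345678", "i-12345678", "/dev/sdf"

          conn = boto.ec2.connect_to_region(REGION)

          # Detach the stuck volume, wait for it to report 'available', then reattach it.
          conn.detach_volume(VOLUME_ID, instance_id=INSTANCE_ID, device=DEVICE)
          while conn.get_all_volumes([VOLUME_ID])[0].status != "available":
              time.sleep(5)
          conn.attach_volume(VOLUME_ID, INSTANCE_ID, DEVICE)

          # If an EBS-booted instance is still unresponsive, stop it and start it again.
          conn.stop_instances([INSTANCE_ID])
          conn.start_instances([INSTANCE_ID])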


      Adron Hall (@adronbh) posted Cloud Failure, FUD, and The Whole AWS O[u]tage… to the Cloud Ave blog on 4/25/2011:

      Ok.  First a few facts.
      • AWS has had a data center problem that has been ongoing for a couple of days.
      • AWS has NOT been forthcoming with much useful information.
      • AWS still has many data centers and cloud regions/etc up and live, able to keep their customers up and live.
      • Many people have NOT built their architecture to be resilient in the face of an issue such as this.  It all points to the mantra to “keep a backup”, but many companies have NOT done that.
      • Cloud Services are absolutely more reliable than comparable hosted services, dedicated hardware, dedicated virtual machines, or other traditional modes of compute + storage.
      • Cloud Services are currently the technologically superior option for compute + storage.
      Now a few personal observations and attitudes toward this whole thing.

      If your site is down because of a single point of failure, that is your bad architectural design, plain and simple. You never build a site like that if you actually expect to stay up 99.99% or even 90% of the time. Anyone in the cloud business, SaaS, PaaS, hosting or otherwise, should know better than that. Every time I hear someone from one of these companies whining about how it was AWS’s responsibility, I ask: is the auto manufacturer responsible for the 32,000 innocent dead Americans in 2010? How about the 50,000 dead in the year of peak automobile deaths? Nope, those deaths are the responsibility of the drivers. When you get behind the wheel you need to, you MUST, know what power you wield. You might laugh, you might jest that I use this corollary, but I wanted to use an example à la Frédéric Bastiat (if you don’t know who he is, check him out: Frédéric Bastiat). Cloud computing, and its use, is the responsibility of the user, who must build their system well.

      One of the common things I keep hearing over and over about this is, “…we could have made our site resilient, but it’s expensive…”  Ok, let me think for a second.  Ummm, I call bullshit.  Here’s why.  If you’re a startup of the most modest means, you probably need to have at least 100-300 dollars of services (EC2, S3, etc.) running to make sure your site can handle even basic traffic and a reasonable business level (i.e. 24/7, some traffic peaks, etc.).  With just $100 bucks one can set up multiple EC2 instances, in DIFFERENT regions, load balance between those, and assure that they’re utilizing a logical storage medium (i.e. RDS, S3, SimpleDB, Database.com, SQL Azure, and the list goes on and on).  There is zero reason that a business should have their data stored ON the flippin’ EC2 instance.  If it is, please go RTFM on how to build an application for the Internets.  K Thx. Awesomeness!!  :)

      Now there are some situations, like when Windows Azure went down (yeah, the WHOLE thing) for about an hour or two a few months after it was released.  It was, however, still in “beta” at the time.  If ALL of AWS went down then these people who have not built a resilient system could legitimately complain right along with anyone else that did build a legitimate system. But those companies, such as Netflix, AppHarbor, and thousands of others, have not had downtime because of this data center problem AWS is having.  Unless you’re on one instance, and you want to keep your bill around $15 bucks a month, then I see ZERO reason that you should still be whining.  Roll your site up somewhere else, get your act together and ACT. Get it done.

      I’m honestly not trying to defend AWS either.  On that note, the response time and responses have been absolutely horrible. There have been zero legitimate social media, forum, or other responses that resemble a solid technical answer or status of this problem. In addition to this, Amazon has allowed the media to run wild with absolutely inane and sensational headlines and often poorly written articles.  From a technology company, especially one of Amazon’s capabilities and technical prowess (generally, they’re YEARS ahead of others), this is absolutely unacceptable and disrespectful on a personal level to their customers; Amazon should mature its support and public interaction along with its technology.

      Now, enough of me berating those that have fumbled because of this. Really, I do feel for those companies and would be more than happy to help straighten out architectures for these companies (not for free). Matter of fact, because of this I’ll be working up some blog entries about how to put together a geographically resilient site in the cloud.  So far I’ve been working on that instead of this rant, but I just felt compelled, after hearing even more nonsense about this incident, to add a little reason to the whole fray.  So stay tuned and I’ll be providing ways to make sure that a single data-center issue doesn’t tear down your entire site!
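
      Adron’s point about spreading instances across regions is easy to make concrete. Here is a minimal sketch of mine (not from Adron’s post) that launches the same application image in two regions with boto; the AMI IDs and key pair name are placeholders, and DNS-level load balancing or failover would sit in front of both deployments.

          import boto.ec2

          # Placeholder AMI IDs: the same application image registered in each region.
          DEPLOYMENTS = {
              "us-east-1": "ami-11111111",
              "us-west-1": "ami-22222222",
          }

          for region, ami in DEPLOYMENTS.items():
              conn = boto.ec2.connect_to_region(region)
              reservation = conn.run_instances(ami, instance_type="t1.micro", key_name="my-key")
              print(region, [instance.id for instance in reservation.instances])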


      Randy Bias (@randybias) posted OpenStack Design Summit 2011 – Schedule on 4/25/2011 to the CloudScaling blog:

      There has been a little bit of confusion on the schedule for the upcoming OpenStack Design Summit.  It is running Tuesday through Friday this week in Santa Clara.  The complete schedule can be found at this link.  You can register here, although there is now apparently a wait list.

      I will be one of the panelists on the Governance Policy Panel Tuesday at 4pm.  This is a good opportunity to get involved in the discussion around how OpenStack should be organized.

      For those who want to connect with me while there, I recommend coordinating via Twitter (@randybias).


      Joe Arnold posted Cloud FAQs: OpenStack Storage (Swift) – Basics Q & A to the CloudScaling blog on 4/21/2011:

      We get a lot of inquiries from clients, potential clients, friends of the family, and strangers, particularly about cloud computing and Infrastructure-as-a-Service (IaaS).  Recently, we had someone ask us a series of questions about OpenStack Storage (Swift).  Given that the OpenStack Design Summit is next week, we thought it would make more sense to answer these questions through the blog for others who might have similar questions.

      Some of this will seem self-evident to those in the OpenStack community, but for those outside, I think these represent a fairly common set of questions that folks ask when trying to understand where Swift is at.

      Q: Are there commercial installations of Swift? Any changes to the code in those deployments?

      A: Outside of Rackspace, we know of three additional commercial deployments: Internap’s XIPCloud, KT’s ucloud and Nephoscale. We are aware of a number of other folks working on deployments, but can’t name names. Altogether, these are some very large deployments outside of Rackspace that are running Swift in production. The core of Swift has been battle-tested not just by Rackspace, but by these other service providers as well.

      Swift provides the core functionality of the object storage system. There are many systems that need to be designed and built around the core of OpenStack. This includes:

      • network and load balancing architecture
      • authentication/user management systems
      • billing
      • portal development
      • customer support tools
      • installation tools
      • operations tools/processes
      • hardware selection

      These do not include the many configuration decisions that depend on cluster configuration. While we are running (and we believe the other deployments are as well) mainline versions of Swift, there is still much to build for a commercial install of Swift.

      Q: Is Swift deployed at Rackspace?

      A: Yes! Swift wasn’t the original implementation of Rackspace’s CloudFiles product. They implemented a more robust and more scalable solution and began running it in production around the same time as the OpenStack release in July.

      Q: Conversely, does Swift have all the code that is running at Rackspace, or are there important parts that Rackspace runs that are not in the Swift code?

      A: As mentioned above, much of the system is context-specific and isn’t fully open-sourced.

      Q: When one is implementing an object store with Swift, any limitations or “gotchas” that one should be aware of?

      A: There are many. We’re constantly learning about how our customers’ clusters behave. Nothing is going to teach us how these clusters operate quite like having a cluster that’s in production serving real customers.

      On the whole, Swift behaves well. When properly configured, the zone architecture delivers exceptional durability of data and configuring a separate ‘front-end’ tier for the proxy and authentication services ensures scale-out for incoming API requests.
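
      The proxy and auth tier described above fronts a plain HTTP object API. For anyone who hasn’t used it, here is a minimal sketch of that API with Python’s requests library and Swift’s v1.0-style auth; the endpoint, account and key are placeholders for whatever a particular cluster’s front end exposes.

          import requests

          # Placeholders: your cluster's auth endpoint and credentials.
          AUTH_URL = "https://swift.example.com/auth/v1.0"
          USER, KEY = "account:user", "secretkey"

          # 1. Authenticate: the proxy returns a storage URL and a token in the headers.
          resp = requests.get(AUTH_URL, headers={"X-Auth-User": USER, "X-Auth-Key": KEY})
          resp.raise_for_status()
          storage_url = resp.headers["X-Storage-Url"]
          token = {"X-Auth-Token": resp.headers["X-Auth-Token"]}

          # 2. Create a container, upload an object, and read it back.
          requests.put(storage_url + "/backups", headers=token).raise_for_status()
          requests.put(storage_url + "/backups/hello.txt", headers=token,
                       data=b"hello, swift").raise_for_status()
          print(requests.get(storage_url + "/backups/hello.txt", headers=token).text)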

      Early in a client deployment we went into pre-production (closed BETA) without monitoring, and a server failed without anyone noticing. There was no service interruption, and Swift dutifully replicated data across to other nodes to keep 3 copies of data in place. We finally noticed when peak throughput numbers weren’t quite as high as they were previously. This really points out the robustness of the Swift architecture.

      Q: From your experience, what are the top 4 or 5 lessons learned?

      A: There are many lessons we’ve learned along the way:

      1. Develop a repeatable deployment process early on. Misconfigured nodes will disrupt the normal operations of the cluster. Have a strong DevOps team in place to develop the software to manage the install & configuration of the cluster.
      2. Have deep knowledge of the inner workings of the cluster. The documentation is good and the code is very well written and understandable. Spend time to get to know the internals of how the system is supposed to behave based on your configuration. For Cloudscaling, this deep knowledge has made it much easier for us to deal with issues in production, fix bugs that we come across and make the enhancements/integrations that are needed to get our customers online.
      3. Share your questions or comments about Swift or other OpenStack projects. We’re strong supporters of the OpenStack community, and we’d love to hear what you’re working on.
      4. Assemble a cross-functional team, as there are many hats that are needed for a successful standup: data center technicians to help plan the power/cooling needed at the DC, networking experts to help design and plan out the network, and a great software development team to write the integrations needed and fix issues related to the software systems of the cluster. Swift is built around common Unix tools, and folks with good systems-administration skills can really help tune a running system.

      _________________

      Have more questions? Send us an email: info@cloudscaling.com.


      <Return to section navigation list> 
