Saturday, September 24, 2011

Windows Azure and Cloud Computing Posts for 9/22/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


Note: OakLeaf posts were delayed due to mishaps during trial installations of virtual Windows 8 Developer Preview machines with Hyper-V as described in my Unable to Connect to Windows 8 Client or Server Virtualized with Windows 2008 R2 Hyper-V post of 9/22/2011 (updated 9/24).


Azure Blob, Drive, Table and Queue Services

Steve Marx (@smarx) described Extension Methods for the August 2011 Windows Azure Storage Features in a 9/23/2011 post:

Some great new features were added to Windows Azure storage and announced at the BUILD conference last week. I’ve added extension methods to make it easier to use this new functionality to my WazStorageExtensions project (on GitHub and NuGet). The new methods include (from the README):

  • Updating queue message visibility timeouts and content (storage team blog post)
  • Upsert and server-side query projection (storage team blog post)
  • A convenience method to initialize storage by creating containers, tables, and queues in a single call

The GitHub project has examples of the new functionality, but here’s a quick teaser for the storage initialization convenience method, upsert, and server-side query projection (new stuff highlighted):

static void Main(string[] args)
{
    var account = CloudStorageAccount.Parse(args[0]);
    account.Ensure(tables: new[] { "temptable" });
    var ctx = account.CreateCloudTableClient().GetDataServiceContext2011();
    var entity = new MyEntity(args[1], args[2]);
    ctx.AttachTo("temptable", entity, null);
    ctx.UpdateObject(entity);
    // Does "InsertOrReplace." Drop the SaveChangesOptions argument to get "InsertOrMerge."
    ctx.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);

    // Server-side projection! "Created" property is not returned.
    Console.WriteLine(
        (from e in account.CreateCloudTableClient().GetDataServiceContext2011()
                .CreateQuery<MyEntity>("temptable")
            where e.PartitionKey == string.Empty && e.RowKey == args[1]
            select new { e.Value }).Single().Value);
}

The new storage features are great, and until there’s support baked into the .NET storage client library, I hope these extension methods make it easier for you to start using them.


<Return to section navigation list>

SQL Azure Database and Reporting

Erik Ejlskov Jensen (@ErikEJ) reported SQL Server Compact Toolbox available for Visual Studio 11 in a 9/22/2011 post:

Visual Studio 11 Developer Preview is now available for testing. As one of the first third party add-ins, a build of the SQL Server Compact Toolbox version 2.4 that supports this Visual Studio Preview version is available via Extension Manager or in the Visual Studio Gallery.


In order to add support for Visual Studio version 11 in an existing add-in, all you need to do is modify the source.extension.vsixmanifest file as shown below:

<SupportedProducts>
  <VisualStudio Version="10.0">
    <Edition>Pro</Edition>
  </VisualStudio>
  <VisualStudio Version="11.0">
    <Edition>Pro</Edition>
  </VisualStudio>
</SupportedProducts>

The result of this change is that the add-in can now be installed for several versions of Visual Studio.


I have had to make some changes, as the Toolbox currently depends on SQL Server Compact 3.5 SP2 to store its connections, and only SQL Server Compact 4.0 is included with Visual Studio 11. In the Developer Preview the version of SQL Server Compact included is the 4.0 RTM version, so no changes there for now.

To detect which version of Visual Studio you are running, you can use the following code in your Package.cs class:

private Version visualStudioVersion;

public Version VisualStudioVersion
{
    get
    {
        if (visualStudioVersion == null)
        {
            var dte = this.GetServiceHelper(typeof(EnvDTE.DTE)) as EnvDTE.DTE;
            string root = dte.RegistryRoot;

            if (root.Contains("10.0"))
            {
                visualStudioVersion = new Version(10, 0);
            }
            else if (root.Contains("11.0"))
            {
                visualStudioVersion = new Version(11, 0);
            }
            else
            {
                visualStudioVersion = new Version(0, 0);
            }
        }

        return visualStudioVersion;
    }
    set
    {
        // Store in the backing field; assigning to the property itself would recurse.
        visualStudioVersion = value;
    }
}

I am currently not bringing forward any 4.0 connections defined in the VS 2010 edition of the add-in. Please let me know if a feature to import these connections to the VS 11 Server Explorer would be useful.

Also, would it be of interest to be able to manage 3.5 databases in VS 11, even though they are not supported in Server Explorer?

As always, please provide any feedback in the comments or via the Codeplex issue tracker.


Avkash Chauhan explained How to Backup SQL Azure Database to Windows Azure Blob Storage directly from your own machine in a 9/20/2011 post:

Using the Import/Export Service for SQL Azure CTP, you can import or export directly between a SQL Azure database and a customer's Windows Azure BLOB storage account.

As you may know, Windows Azure storage costs about $0.10 per GB, so if you decide to use Windows Azure storage to back up your SQL Azure DB, it could be the best option. Having SQL Azure and Windows Azure storage in the same data center is the best setup for the Azure-storage-as-SQL-Azure-backup scenario.

The service uses the familiar BACPAC file format on both sides of the import and export.

The import/export service provides public REST endpoints for the submission of requests. To start with, you can back up your SQL Azure DB directly to Azure storage using the "DAC SQL Azure Import Export Service Client V 1.2" tool, located below:

http://sqldacexamples.codeplex.com/releases/view/72388

While trying this tool, I found the following blog very useful, and its videos were to the point and helped me get my work done:

Here are the command-line details for this tool:

Microsoft (R) DAC Import Export Sample version 1.0.1.0
Copyright (C) Microsoft Corporation. All rights reserved.

Command Line Parameters:
-H[elp] | -? Show this help text.
-X[export] Perform an export action.
-I[mport] Perform an import action.
-D[atabase] <database> Database name to perform the action on.
-F[ile] <filename> Name of the backup file.
-S[erver] <servername> SQL Server Name and instance.
-E Use Windows authentication
(not valid for SQL Azure)
-U[ser] User name for SQL authentication.
-P[assword] Password for SQL authentication.
-DROP Drop a database and remove the DAC registration.(*2)
-EDITION <business|web> SQL Azure edition to use during database creation.(*4)
-SIZE <1> SQL Azure database size in GB.(*4)
-N Encrypt Connection using SSL.
-T Force TrustServerCertificate(*6)
-EXTRACT Extract database schema only.
-DEPLOY Deploy schema only to database.


Usage:
Export a database to a bacpac using Windows Auth:
DacImportExportCli -S myserver -E -B -D northwind -F northwind.bacpac -X

Import a bacpac to a database using Windows Auth:
DacImportExportCli -S myserver -E -D nw_restored -F northwind.bacpac -I

Import a bacpac to SQL Azure with options:
DacImportExportCli -S myazure -U azureuser -P azurepwd -D nw_restored -F northwind.bacpac -I -EDITION web -SIZE 5 -N

Drop both database and DAC registration:
DacImportExportCli -S myserver -E -D nw_restored –DROP

More info:

  • On import, the database must not exist; a new database is always created. SQL Azure uses system edition defaults if not set.
  • DROP is very aggressive. It will attempt to remove a database that is not registered as a DAC, and it will also remove a DAC registration whose database is missing. Use -D to specify the database.
  • Databases can use this tool only if they contain SQL 2008 R2 DAC-supported objects and types.
  • Choose the SQL Azure options desired; this may impact billing (only valid against SQL Azure).
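If you want to run the backup on a schedule rather than by hand, the export command above can be wrapped in a small console app and launched from Windows Task Scheduler. This is only a hedged sketch; the tool path, server name, credentials and file names are placeholders you would replace with your own:

using System;
using System.Diagnostics;

class ScheduledBacpacExport
{
    static void Main()
    {
        // Placeholder values; substitute your own tool path, server, credentials and paths.
        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Tools\DacImportExportCli.exe",
            Arguments = "-S myazureserver.database.windows.net -U azureuser -P azurepwd " +
                        "-D mydatabase -F C:\\Backups\\mydatabase.bacpac -X",
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (var process = Process.Start(startInfo))
        {
            // Echo the tool's output so it lands in the scheduled task's log.
            Console.WriteLine(process.StandardOutput.ReadToEnd());
            process.WaitForExit();
        }
    }
}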

<Return to section navigation list>

MarketPlace DataMarket and OData

Greg Duncan posted Some Resources for OData and Azure [BUILD 2011] to the OData Primer on 9/14/2011 (missed when published):

Writing...Data Services - Some Resources for OData and Azure

With the rapidly growing popularity of the Windows Azure Platform and Microsoft’s Cloud Services offerings, I thought it a good idea to put up a quick post about where you can find out about publishing OData services to the Azure cloud, especially since it’s so easy to do with the Windows Azure Tools for Visual Studio. Anyway, the following is some of the more complete and useful content available out there that shows you how to publish WCF Data Services to Azure: ...

<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Sean Deuby (@shorinsean) claimed “How Microsoft IT runs one of the world’s largest federation services” in a deck for his Federation at Microsoft article of 9/24/2011 for the Windows IT Pro blog:

If you’ve been reading this column for a while, you’re realizing that sooner or later you’ll need to implement some kind of federation service in your identity infrastructure. This service will allow you to provide single sign-on (SSO) to cloud-based services—both on-premises and in the public cloud—for your enterprise users, using their enterprise credentials. If you don’t provide SSO, your users will be forced to find their own ways of using these cloud service providers, and probably not in a way you’d prefer. In this column, I’ll review the production federation service of a well-known enterprise: Microsoft.

To find out how Microsoft runs its federation service, I sat down with my friend and ex-Directory Services MVP, Laura Hunter, at the Cloud Identity Summit. Laura is an ex-MVP because she accepted a position as identity and access management architect for Microsoft IT, specifically for federation services. Besides her principal responsibilities with the federation infrastructure, she speaks at various conferences to show IT pros how federation is managed in what's probably the largest production federation environment in the world.

Federation’s History at Microsoft

Microsoft started “dogfooding” federation with the release of Active Directory Federation Services (AD FS) 1.0 at the time of Windows Server 2003 R2. The company’s original reason for implementing AD FS wasn’t to provide access to what we now think of as cloud applications (remember, this was around 2005), but to make it easier for its employees to access Microsoft’s external providers. The first federated trusts for the company were payroll, HR, employee benefits, and the Microsoft company store. Establishing these trusts made it possible for employees to use their enterprise Microsoft credentials to access the provider’s resources.

In 2010, Microsoft IT’s upgrade of its federation service to AD FS 2.0 with its support of the widely used SAML protocol—coupled with the rise of cloud computing—resulted in an explosion of use for this service. Microsoft developers began creating new applications, and re-architecting existing applications, to use claims-based authentication instead of traditional integrated Windows authentication. Laura estimates that Microsoft IT is currently managing approximately 900 relying party trusts, though not all of them are for production services. (There might be as many as six trusts needed to support a production service at each stage of its lifecycle, such as proof of concept, development, customer test, integration test, and so on.)

Perhaps surprisingly, a large number of these applications are on premises within the Microsoft network. An important feature of claims-aware applications is that, to the applications, the traditional corporate firewall (the “flaming brick wall,” as security expert Gunnar Peterson puts it) doesn’t exist because all the application’s traffic goes over always-open ports 80 (HTTP) or 443 (HTTPS). As a result, claims-aware applications are very portable and are equally comfortable inside or outside that corporate firewall.

Microsoft’s IAM Environment

Figure 1 shows an overview of Microsoft IT’s identity and access management (IAM) environment. It consists of three major areas: Microsoft’s internal network, called CorpNet; its extranet (DMZ), for collaboration with partners; and cloud services. Let’s look at CorpNet first. Naturally, Microsoft uses all the identity tools at its disposal, so it uses Forefront Identity Manager (FIM) to integrate the company’s HR database into the product’s metaverse. This metaverse is “upstream” of its AD environment and feeds select HR data into it. As you might suspect, a company like Microsoft with tens of thousands of developers has a pretty complicated AD configuration.

Figure 1: The Microsoft IT Identity Environment

It's important to remember that when the phrase Log on using your enterprise credentials is casually tossed around in federation scenarios, this authentication process is often a lot more complicated than it sounds. Many companies don't have a single domain, or forest, that contains everyone's user accounts. For a variety of reasons, user accounts might be scattered across multiple forests. Microsoft, for example, has eight different AD production forests comprising 18 production domains, any one of which might contain a user's corporate-sanctioned credentials. (Of course there are many test and development forests with separate, isolated credentials.) Because it's not cost- or labor-effective to provide separate federation services for each credential store, Microsoft has configured its major account forests to use forest trusts with selective authentication where required, to allow users to access resources—like federation—across the forests. Along with the multi-forest AD environment, IT's production AD FS service interacts with other claims sources (e.g., physical security), authorization services, and more than 2500 IT-supported line-of-business (LOB) applications.

Microsoft’s extranet environment exists to allow Microsoft employees to sponsor credentials for partners and vendors for collaboration purposes, and to allow these partners to access resources such as SharePoint. An AD FS proxy is another key component of the extranet, which I’ll review in more detail later.

Finally, Microsoft’s cloud computing environment is an enormous and vitally important facet of Microsoft’s computing story. This environment falls into three categories. Office 365—Microsoft’s Software as a Service (SaaS) version of its most popular desktop and server applications—is used by Microsoft internally (in addition to the service’s external customers) and uses the Dirsync service to synchronize identities between corporate Office 365 users and the service. Windows Azure is Microsoft’s Platform as a Service (PaaS) offering. PaaS provides a platform for developing SaaS applications. It was the first Microsoft cloud computing product for the simple reason that Microsoft’s own developers needed a platform for creating SaaS versions of the company’s enterprise software. As you might expect, Windows Azure is very heavily used at Microsoft, and AD FS—along with the Windows Azure AppFabric Access Control Service (ACS)—facilitates this. Finally, Microsoft uses a wide variety of third-party cloud computing service providers and partners (such as the previously mentioned payroll service). [Emphasis added.]

Federation Is Mission Critical

Even though federation is a new service in the IT world, don’t make the mistake of thinking it isn’t an important service. One way to think of a federation service is as a gateway between the Kerberos world and the claims-based world. Claims-based authentication uses claims wrapped in a digitally signed token. The standard for enterprise authentication is AD, of course, and it uses Kerberos tickets. Making enterprise authentication work with claims-aware applications means that tickets must be transformed to tokens, and vice versa. This transformation is the main function of the Security Token Service (STS) component of a federation service such as AD FS. This means that as companies begin to use claims-aware applications both externally and internally, the federation service quickly becomes part of the mission-critical infrastructure. Just count the number of arrows leading to and from AD FS and its proxy service in Figure 1 to see how critical it is to Microsoft!
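To make the claims-based side of that gateway concrete, here is a minimal sketch (an illustration, not code from the article) of what a claims-aware .NET application sees once AD FS has transformed a Kerberos ticket into a signed token. It assumes WIF (Microsoft.IdentityModel) is referenced and the app is already configured as a relying party:

using System.Diagnostics;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimsInspector
{
    public static void DumpClaims()
    {
        // In a claims-aware app, the principal WIF builds from the incoming token
        // carries the claims issued (and possibly transformed) by the STS.
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null)
        {
            return; // not running under a claims-aware pipeline
        }

        foreach (Claim claim in identity.Claims)
        {
            // e.g. .../claims/emailaddress = someone@contoso.com
            Debug.WriteLine(claim.ClaimType + " = " + claim.Value);
        }
    }
}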

Laura’s advice to companies planning a federation service (that should be most of you) is to look at your requirements, because those requirements will determine what kind of federation architecture you need. She says, “At the end of the day, federation is pretty simple. It’s about my people accessing your stuff, or your people accessing my stuff, or my people accessing a provider’s stuff. Who are your customers? Who are you trying to authenticate to what applications?” An enterprise that wants to authenticate its users to SaaS apps should probably have an on-premises federation service. An ISV that wants to make it easy for users to authenticate to a cloud-based application should probably host its federation service in the cloud, too.

Laura likes to joke, “If you’re having trouble setting up AD FS, it’s either a problem with PKI or a typo.” On a more serious note, she recommends that you build your federation service with the end state in mind—in other words, plan for high availability from the beginning. Based on my AD experience, I’d suggest that you build in lifecycle management for your federated trusts from the start, just like you should be doing lifecycle management for AD users, groups, and computers.

Don’t forget to also take the requirements for an AD FS proxy into account. You’ll want an AD FS proxy (an AD FS installation option) as part of your architecture in addition to the core AD FS service. Why do you need a proxy? Unlike the AD FS service itself, the proxy doesn’t have to be joined to a domain; it’s usually used in a DMZ to forward external authentication requests to the AD FS service. In Microsoft’s case, it’s used to allow employees outside the corporate network to use claims-aware applications. It also allows extranet partners to use some of these applications. Like the core AD FS service, it should also be configured for high availability.

Federation isn’t a “nice to have” add-on. It will quickly become a mandatory high-availability service of your IT infrastructure. Leading by example, Microsoft IT demonstrates federation’s importance. To quote Microsoft Technical Fellow John Shewchuk, “Identity is the glue that binds federated IT together.” And a federation service, whether it’s maintained on premises or hosted in the cloud, is the glue that binds your AD and claims-aware applications together to help create a federated IT.

Related Content:


Abishek Lal listed Pointers to Service Bus information in a 9/22/2011 post:

Following is some collateral to get you started, and I am happy to answer any questions/concerns.

1) Release announcement: http://blogs.msdn.com/b/windowsazure/archive/2011/09/16/the-service-bus-september-2011-release.aspx

2) MSDN Docs (API Reference/Tutorial etc.): http://msdn.microsoft.com/sb

3) SDK & Samples: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=27421

4) Blog post describing CTP to PROD API changes: http://rickgaribay.net/archive/2011/09/14/azure-appfabric-service-bus-brokered-messaging-ga-amp-rude-ctp.aspx

5) Videos from //build conference:

a. Building loosely-coupled apps with Windows Azure Service Bus Topics and Queues: http://channel9.msdn.com/Events/BUILD/BUILD2011/SAC-862T

b. Service Bus Queues and Topics Advanced: http://channel9.msdn.com/posts/ServiceBusTopicsAndQueues

c. Securing Service Bus with ACS: http://channel9.msdn.com/posts/Securing-Service-Bus-with-ACS
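As a quick, hedged illustration of the brokered messaging API those links cover, here is a minimal queue send/receive sketch against the September 2011 release; the namespace, issuer name, and key are placeholders:

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class QueueQuickStart
{
    static void Main()
    {
        // Placeholder namespace and credentials.
        var uri = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);
        var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerKey");

        // Create the queue if it doesn't exist yet.
        var namespaceManager = new NamespaceManager(uri, tokenProvider);
        if (!namespaceManager.QueueExists("testqueue"))
        {
            namespaceManager.CreateQueue("testqueue");
        }

        // Send and receive a brokered message.
        var factory = MessagingFactory.Create(uri, tokenProvider);
        var client = factory.CreateQueueClient("testqueue");

        client.Send(new BrokeredMessage("Hello, Service Bus!"));

        var received = client.Receive(TimeSpan.FromSeconds(10));
        if (received != null)
        {
            Console.WriteLine(received.GetBody<string>());
            received.Complete();
        }
    }
}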

Happy messaging!


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Brent Stineman (@BrentCodeMonkey) explained Leveraging the RoleEntryPoint (Year of Azure – Week 12) in a 9/24/2011 post:

So the last two weeks have been fairly “code lite”, my apologies. Work has had me fairly slammed the last 6 weeks or so and it was finally catching up with me. But I took this week off (aka minimal conference calls/meetings), and today my phone is off and I have NOTHING on my calendar. So I finally wanted to turn my hand to a more technical blog post.

I’ve been doing a fair amount of training of late, and something that usually comes up, and that I make a point of diving into fairly well, is how to be aware of changes in our environment. In the pre-1.3 SDK days, the default role entry points always had a method in them to handle configuration changes. But since that has gone away and we have the ability to use startup scripts, not as much attention gets paid to these things. So today we’ll review it and call out a few examples.

Yeah for code samples!

Methods and Events

There are two groups of hooks that allow us to respond to events or changes in role state/status: methods declared in the RoleEntryPoint class and events in the RoleEnvironment class. But before we dive into these two, we should understand the lifecycle of a role instance.

According to an excellent post by the Azure team from earlier this year, the sequence of events in a role instance’s life that we can respond to is: OnStart, Changing, Changed, Stopping, and OnStop. I’ll add two items to this: Run, which follows OnStart, and StatusCheck, which is used by the Azure agent to determine whether the instance is “ready” to receive requests from the load balancer or is “busy”.

So let’s walk through these one by one.

OnStart is where it all begins. When a role instance is started, the Azure agent will reflect over the role’s primary assembly and, upon finding a class that inherits from RoleEntryPoint, call that class’s OnStart method. By default, that method will usually look like this:

public override bool OnStart()
{
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit = 12;

    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

    return base.OnStart();
}

And if we created a WorkerRole, we’ll also have a default Run method that looks like this:

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("WorkerRole1 entry point called", "Information");

    while (true)
    {
        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");
    }
}

Run will be called after OnStart. But here is our first curveball: we can add a Run method to a web role, and it will also be called by the Azure agent.
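For example, here is a minimal sketch (not from Brent's sample project) of a web role that also overrides Run to do background work while the web site serves requests:

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // Background loop running alongside the web site in the same role instance.
        while (true)
        {
            Trace.WriteLine("WebRole background work", "Information");
            Thread.Sleep(TimeSpan.FromMinutes(1));
            // e.g. poll a queue, clean up temp files, emit a heartbeat...
        }
    }
}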

Next up, we have the OnStop method.

public override void OnStop()
{
    try
    {
        // Add code here that runs when the role instance is to be stopped
    }
    catch (Exception e)
    {
        Trace.WriteLine("Exception during OnStop: " + e.ToString());
        // Take other action as needed.
    }
}

This method is a great place to allow our instance to shut down slowly and gracefully. The catch is that we can’t take more than 30 seconds or the instance will be shut down, so anything we’re going to do, we’ll need to do quickly.
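Here is one hedged sketch of staying inside that budget: signal the Run loop to finish and then wait for it, but never longer than roughly 25 seconds:

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // Set by OnStop to ask Run to finish; set by Run when it has finished.
    private readonly ManualResetEvent stopRequested = new ManualResetEvent(false);
    private readonly ManualResetEvent runCompleted = new ManualResetEvent(false);

    public override void Run()
    {
        while (!stopRequested.WaitOne(0))
        {
            // Do one unit of work per iteration.
            Thread.Sleep(10000);
        }

        runCompleted.Set();
    }

    public override void OnStop()
    {
        // Ask the Run loop to stop, then wait for it, staying under the
        // 30-second window before the instance is forcibly shut down.
        stopRequested.Set();
        runCompleted.WaitOne(TimeSpan.FromSeconds(25));

        base.OnStop();
    }
}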

We do have another opportunity to start handling shutdown: the RoleEnvironment.Stopping event. This is called once the instance has been taken out of the load balancer, but isn’t called when the guest VM is rebooted. Because this is an event, we have to create not just the event handler, but also wire it up:

RoleEnvironment.Stopping += RoleEnvironmentStopping;

private void RoleEnvironmentStopping(object sender, RoleEnvironmentStoppingEventArgs e)
{
    // Add code that is run when the role instance is being stopped
}

Also related to the load balancer, another event we can handle is StatusCheck. This can be used to tell the agent whether the role instance should or shouldn’t get requests from the load balancer.

RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;

// Use the busy object to indicate that the status
// of the role instance must be Busy
private volatile bool busy = false;

private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
    if (this.busy)
        e.SetBusy();
}

But we’re not done yet…

Handling Environment Changes

Now there are two more events we can handle, Changing and Changed. These events are ideal for handling changes to the service configuration. We can optionally decide to restart our role instance by setting the event’s RoleEnvironmentChangingEventArgs.Cancel property to true during the Changing event.

RoleEnvironment.Changing += RoleEnvironmentChanging;

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
}

RoleEnvironment.Changed += RoleEnvironmentChanged;

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
}

The real value in both of these is detecting and handling changes. If we just want to iterate through the changes, we can put in a code block like this:

// Get the list of configuration changes
var settingChanges = e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>();

foreach (var settingChange in settingChanges)
{
    var message = "Setting: " + settingChange.ConfigurationSettingName;
    Trace.WriteLine(message, "Information");
}

If you wanted to only handle Topology changes (say a role instance being added or removed), you would use a snippet like this:

// topology changes
var changes = from ch in e.Changes.OfType<RoleEnvironmentTopologyChange>()
              where ch.RoleName == RoleEnvironment.CurrentRoleInstance.Role.Name
              select ch;

if (changes.Any())
{
    // Topology change occurred in the current role
}
else
{
    // Topology change occurred in a different role
}

Lastly, there are times when you may only be updating a configuration setting. If you want to test for this, we’d use a snippet like this:

if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
{
    e.Cancel = true; // cancel the change and recycle (restart) the role instance
}

Discovery of the RoleEntryPoint

This all said, there are two common questions that come up: how does Azure find the entry point, and can I set up a common entry point to be used by multiple roles? I’ll address the latter first.

The easiest way I’ve found to create a shared RoleEntryPoint is to set it up in its own class library, then add a reference to that library to all the role projects. In each role project, change the default RoleEntryPoint to inherit from the shared class. It’s simple enough to set up (takes less than 5 minutes) and easy for anyone used to object-oriented programming to wrap their head around.
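A minimal sketch of that layout (the names are illustrative, not from a real project):

using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

// Lives in its own class library, referenced by every role project.
public class CommonRoleEntryPoint : RoleEntryPoint
{
    public override bool OnStart()
    {
        ServicePointManager.DefaultConnectionLimit = 12;

        // Wire up the shared event handlers once for every role that inherits.
        RoleEnvironment.Changing += (sender, e) =>
        {
            // Shared change handling goes here.
        };

        return base.OnStart();
    }
}

// In each role project, the default entry point simply inherits the shared one.
public class WebRole : CommonRoleEntryPoint { }

public class WorkerRole : CommonRoleEntryPoint { }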

The first question, about discovery, is a bit more complicated. If you read through that entire thread, you’ll find references to a “targets” file and the cloud service project. Prior to the new 1.5 SDK, this was true. But 1.5 introduced an update to the Web and Worker role schemas: a new element, NetFxEntryPoint, that we can use to specify the assembly to be searched for the RoleEntryPoint. Using it, you can point directly to an assembly that contains the RoleEntryPoint.

Both approaches work, so use the one that best fits your needs.

And now we exit…

Nearly everything I’ve put in this post is available in the MSDN files. So I haven’t really built anything new here. But what I have done is create a new Cloud Service project that contains all the methods and events and even demonstrates the inheritance approach for a shared entry point. It’s a nice reference project that you can copy/paste from when you need examples without having to hunt through MSDN. You can download it from here.

That’s all for this week. So until next time!



Maarten Balliauw (@maartenballiauw) described NuGet push... to Windows Azure in a 9/23/2011 post:

When looking at how people like to deploy their applications to a cloud environment, a large faction seems to prefer being able to use their source control system as a source for their production deployment. While interesting, I see a lot of problems there: your source code may not run immediately and probably has to be compiled. You don’t want to maintain compiled assemblies in source control, right? Also, maybe some QA process is in place where a deployment can only occur after approval. Why not use source control for what it’s there for: source control? And how about using a NuGet repository as the source for our deployment? Meet the Windows Azure NuGetRole.

Disclaimer/Warning: this is demo material and should probably not be used for real-life deployments without making it bullet proof!

Download the sample code: NuGetRole.zip (262.22 kb)

How to use it

If you compile the source code (download), you have three steps left in getting your NuGetRole running on Windows Azure:

  • Specifying the package source to use
  • Add some packages to the package source feed (which you can easily host on MyGet)
  • Deploy to Windows Azure

When all these steps have been taken care of, the NuGetRole will download all latest package versions from the package source specified in ServiceConfiguration.cscfg:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="NuGetRole.Azure"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
                      osFamily="1"
                      osVersion="*">
  <Role name="NuGetRole.Web">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="PackageSource" value="http://www.myget.org/F/nugetrole/" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Packages you publish should only contain a content and/or lib folder. Other package contents will currently be ignored by the NuGetRole. If you want to add some web content like a default page to your role, simply publish the following package:

[Screenshot: the package opened in NuGet Package Explorer]

Just push, and watch your Windows Azure web role farm update their contents. Or have your build server push a NuGet package containing your application and have your server farm update itself. Whatever pleases you.

How it works

What I did was create a fairly empty Windows Azure project (download). In this project, one Web role exists. This web role consists of nothing but a Web.config file and a WebRole.cs class which looks like the following:

public class WebRole : RoleEntryPoint
{
    private bool _isSynchronizing;
    private PackageSynchronizer _packageSynchronizer = null;

    public override bool OnStart()
    {
        var localPath = Path.Combine(Environment.GetEnvironmentVariable("RdRoleRoot") + "\\approot");

        _packageSynchronizer = new PackageSynchronizer(
            new Uri(RoleEnvironment.GetConfigurationSettingValue("PackageSource")), localPath);

        _packageSynchronizer.SynchronizationStarted += sender => _isSynchronizing = true;
        _packageSynchronizer.SynchronizationCompleted += sender => _isSynchronizing = false;

        RoleEnvironment.StatusCheck += (sender, args) =>
        {
            if (_isSynchronizing)
            {
                args.SetBusy();
            }
        };

        return base.OnStart();
    }

    public override void Run()
    {
        _packageSynchronizer.SynchronizeForever(TimeSpan.FromSeconds(30));

        base.Run();
    }
}

The above code is essentially wiring some configuration values like the local web root and the NuGet package source to use to a second class in this project: the PackageSynchronizer. This class simply checks the specified NuGet package source every few minutes, checks for the latest package versions and if required, updates content and bin files. Each synchronization run does the following:

public void SynchronizeOnce()
{
    var packages = _packageRepository.GetPackages()
        .Where(p => p.IsLatestVersion == true).ToList();

    var touchedFiles = new List<string>();

    // Deploy new content
    foreach (var package in packages)
    {
        var packageHash = package.GetHash();
        var packageFiles = package.GetFiles();
        foreach (var packageFile in packageFiles)
        {
            // Keep filename
            var packageFileName = packageFile.Path.Replace("content\\", "").Replace("lib\\", "bin\\");

            // Mark file as touched
            touchedFiles.Add(packageFileName);

            // Do not overwrite content that has not been updated
            if (!_packageFileHash.ContainsKey(packageFileName) || _packageFileHash[packageFileName] != packageHash)
            {
                _packageFileHash[packageFileName] = packageHash;

                Deploy(packageFile.GetStream(), packageFileName);
            }
        }

        // Remove obsolete content
        var obsoleteFiles = _packageFileHash.Keys.Except(touchedFiles).ToList();
        foreach (var obsoletePath in obsoleteFiles)
        {
            _packageFileHash.Remove(obsoletePath);
            Undeploy(obsoletePath);
        }
    }
}

Or in human language:

  • The specified NuGet package source is checked for packages
  • Every package marked “IsLatest” is being downloaded and deployed onto the machine
  • Files that have not been used in the current synchronization step are deleted

This is probably not a bullet-proof solution, but I wanted to show you how easy it is to use NuGet not only as a package manager inside Visual Studio, but also from your code: NuGet is not just a package manager but in essence a package management protocol, which you can easily extend.
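If you want to play with that idea outside a role, here is a hedged sketch using the NuGet.Core assembly (presumably what PackageSynchronizer builds on) to query the same sample feed for the latest package versions:

using System;
using System.Linq;
using NuGet;

class FeedReader
{
    static void Main()
    {
        // The sample feed from ServiceConfiguration.cscfg above.
        var repository = PackageRepositoryFactory.Default
            .CreateRepository("http://www.myget.org/F/nugetrole/");

        var latestPackages = repository.GetPackages()
            .Where(p => p.IsLatestVersion)
            .ToList();

        foreach (var package in latestPackages)
        {
            Console.WriteLine("{0} {1}", package.Id, package.Version);

            foreach (var file in package.GetFiles())
            {
                // The content\ and lib\ entries are what NuGetRole deploys.
                Console.WriteLine("  " + file.Path);
            }
        }
    }
}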

One thing to note: I also made the Windows Azure load balancer ignore the role instance that’s updating itself. This means a role instance that is synchronizing its contents will never be available in the load balancing pool, so no traffic is sent to the role instance during an update.


Wade Wegner (@WadeWegner) announced Cloud Cover Episode 59 - Using the Windows Push Notification Service with Windows Azure on 9/23/2011:

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show at @CloudCoverShow.

In this episode, Nick Harris—Technical Evangelist for Windows Azure—joins Steve and Wade to discuss the Windows Azure Toolkit for Windows 8. Nick provides background on the Windows Push Notification Service, and shows how to go about building and running a Metro-style app that uses push notifications managed by services running in Windows Azure.

In the news:

Attached file: Episode59WPNWindowsAzure_ch9.wmv

Jesus Rodriguez suggested that you Manage your IT Systems from your Mobile Device: Tellago Studios Announces Moesion in a 9/23/2011 post:

This week Tellago Studios announced Moesion: a cloud platform to manage your IT systems or cloud services directly from your smartphone or tablet.

Moesion is the result of our constant obsession to bring innovation and modern technologies to the world of enterprise software. In this case, we want to use the power of mobile and cloud computing to make IT Professionals more productive by accessing their IT systems and cloud services directly from their smartphone or tablet. Even better, we enable all those capabilities without the need of installing any on-premise server side infrastructure.

What Problem does Moesion Solve?

As an IT professional you spend a good portion of your day interacting with different systems that could be hosted on-premise or on the cloud. Traditionally, the management of those systems requires you to have some sort of VPN connection into your corporate network which, most likely, requires you to be in front of your computer.

Wouldn’t it be cool if you could quickly perform different management tasks on your IT systems directly from your smartphone or tablet?

Well… we have an APP for that :)

Moesion?

Moesion is a cloud-based platform that enables the management of your IT systems, whether hosted on-premise or on the cloud, from your smartphone or tablet. We accomplish this by using a sophisticated, highly secured, real time middleware infrastructure that brokers the communication between your phone and a very small agent that sits on your specific servers. The following picture illustrates that concept.

What Can You Do With Moesion?
Detect and fix problems with your IT systems from your smartphone or tablet

Moesion includes a series of HTML5 applications that enable the management of some of the most popular IT systems in a visual manner directly from your mobile device. The current version focuses on the management of Windows Server infrastructure and includes the following IT systems:

  • Event Log
  • Windows Services
  • Windows Processes
  • File system
  • Users, Groups
  • Devices
  • Disks
  • Operating System Info
  • Hotfixes
  • Internet Information Services 6.0, 7.0,7.5
  • SQL Server 2005, 2008, 2008 R2
  • SharePoint Server 2007, 2010
  • BizTalk Server 2004, 2006, 2006 R2, 2009, 2010
  • Windows Azure [Emphasis added.]
  • Hundreds of scripts
Publish And Use Your Own Scripts

Moesion allows you to publish your own PowerShell and VBScripts and use them directly from your smartphone or tablet.

Get New Apps: An Application Store For New IT Systems Management Apps And Scripts

Moesion includes an application store on which you can find new applications and scripts to manage your IT infrastructure. In addition to the initial apps listed above, we are currently working on new apps to manage different products and technologies directly from your smartphone. Even better, you can tell us which applications you would like us to focus on.

Check out our immediate roadmap.

Who Did What? Noninvasive Tracking

Moesion provides a tracking infrastructure that traces the different actions executed in your IT infrastructure as well as the devices those actions were triggered from. This capability gives you accurate visibility into the actions taken in your IT infrastructure. It is important to note that Moesion DOES NOT KEEP TRACK OF ANY SENSITIVE INFORMATION about your network, servers or infrastructure in general.

Relax: Your Stuff Is Secure

Moesion uses both sophisticated transport and message security mechanisms to protect the communication between our cloud infrastructures and your servers. Additionally, we include an authorization layer on which you can control the access permissions to a specific server. Finally, Moesion includes a device provisioning layer on which you can control which specific devices can interact with your servers.

How To Get Your Moesion Running?

Moesion will be available in private beta mode. You can sign up for the private beta at our website. While you are at it, check out this cool video that shows you what you can do with Moesion.

Initial feedback
We’ve received an incredible reception from the press, customers, and others. Check out some of the news coverage of Moesion in our news center.

Jesus is a Microsoft BizTalk Server MVP.


Steve Marx (@smarx) described Some Updates to Waz-Cmd (Ruby Command-Line Tool for Windows Azure) in a 9/22/2011 post:

Dedicated followers may remember waz-cmd, which I blogged about before. It’s a command-line tool written in Ruby that you can use to manage your Windows Azure applications and storage accounts. One of the reasons I wrote waz-cmd was so that I’d have a cross-platform tool for managing my Windows Azure applications from the command-line. (Just as I’m a big fan of using multiple programming languages, I also like to use several operating systems.)

To that end, I’ve committed a few changes tonight and published version 0.4.1 of the waz-cmd Ruby gem. These include fixes for both Mac and Windows, and at this point, you should be able to simply run gem install waz-cmd on either of those platforms and start creating applications, deploying packages, etc. If you tried this before and ran into trouble, please install the latest version of the gem and give it another shot. Let me know if anything goes wrong.

For those of you looking for a fantastic Windows-only command-line experience, I recommend the Windows Azure PowerShell Cmdlets, with the new 2.0 release.


Brian Swan (@brian_swan) explained How to Get Diagnostics Info for Azure/PHP Applications–Part 2 in a 9/22/2011 post to the Window Azure’s Silver Lining blog:

imageIn part 1 of this two part series, I showed how you can use a configuration file (the diagnostics.wadcfg file) to configure Azure diagnostics. However, that approach requires that you know all the diagnostic information that you want to collect before you deploy your application. In this post, I’ll show how you can use the Windows Azure SDK for PHP API to configure diagnostic information after an application has been deployed to Windows Azure.

Recall from part 1 that the Windows Azure Diagnostics Monitor persists your diagnostics configuration in your Azure Blob storage (in a container called wad-control-container). One configuration file is stored for each role instance you are running (the name of each file is of the format <deployment_id>/<role_name>/<role_instance_name>). You could, of course, download the configuration files (they are XML files), edit them, and upload them to your Blob storage. The Diagnostics Monitor would, on its configured schedule, check for updates and apply them. However, it would be much more fun to write code to do this for us. So, from here, I’ll walk you through a PHP script that turns on the collection of 3 performance counters. I’ll provide the complete script at the end, and provide some guidance as to how you can turn on other diagnostics. (I’ll assume that you have read part 1 of this series.)

First, I’ll include the classes I’ll use:

require_once 'Microsoft/WindowsAzure/Storage/Blob.php'; 
require_once 'Microsoft/WindowsAzure/Diagnostics/Manager.php';
require_once 'Microsoft/WindowsAzure/Management/Client.php';

Next, I’ll define several constants I’ll use in my script:

// Define constants for using Blob and Client classes.
define("STORAGE_ACCOUNT_NAME", "your_storage_account_name");
define("STORAGE_ACCOUNT_KEY", "your_storage_account_key");
define("ROLE_NAME", "PhpOnAzure.Web");
define("SID", 'your_azure_subscription_id');
define("CERTIFICATE", 'path\to\your\management\certificate.pem');
define("PASSPHRASE", 'your_certificate_passphrase');
define("DNS_PREFIX", 'dns_prefix_for_your_hosted_service');
define("SLOT", 'production'); //production or staging

A couple notes about these constants:

Now I’ll use the Client, Blob, and Manager classes to get deployment information and current diagnostic configuration (from my blob storage). I’ll also define an array that contains the performance counters I want to collect:

// Get deployment
$client = new Microsoft_WindowsAzure_Management_Client(SID, CERTIFICATE, PASSPHRASE);
$deployment = $client->getDeploymentBySlot(DNS_PREFIX, SLOT);
 
// Create diagnostics manager
$blob = new Microsoft_WindowsAzure_Storage_Blob('blob.core.windows.net', STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_KEY);
$manager = new Microsoft_WindowsAzure_Diagnostics_Manager($blob);
 
// Specify performance counters to collect.
$counters = array('\Processor(_Total)\% Processor Time', '\Memory\Available Mbytes', '\TCPv4\Connections Established' );

And finally, I’ll loop through the role instances for my deployment and turn on the performance counters I defined above.

// Create and set a configuration for each Web role.
foreach($deployment->roleinstancelist as $index => $value) {
    if($value['rolename'] == ROLE_NAME) {
        $role_id = $deployment->privateid."/".ROLE_NAME."/".ROLE_NAME."_IN_".$index;
        $configuration = $manager->getConfigurationForRoleInstance($role_id); 
        foreach($counters as $c) { 
            $configuration->DataSources->PerformanceCounters->addSubscription($c, "60");
        } 
        $configuration->DataSources->OverallQuotaInMB = 10;
        $configuration->DataSources->PerformanceCounters->BufferQuotaInMB = 10;
        $configuration->DataSources->PerformanceCounters->ScheduledTransferPeriodInMinutes = 1; 
        $manager->setConfigurationForRoleInstance($role_id,$configuration);
    }
}

A couple of notes about the code above:

  • This code assumes that you might have multiple roles for a deployment (e.g. a worker role). It will only turn on performance counters for the specified ROLE_NAME.
  • The getConfigurationForRoleInstance method on the Manager class requires a “role Id” parameter. The role Id is of this format: <deployment id>/<role name>/<role name>_IN_<instance index>.
  • The DataSources property on the Configuration class has several properties that you can set: overallquotainmb, logs, diagnosticinfrastructurelogs, performancecounters, windowseventlog, and directories.

That’s it. Within a few minutes after executing that script you should begin seeing performance counter data written to the WADPerformanceCountersTable table in your Windows Azure Table storage.

Collecting other diagnostic information is similar. However, you’ll need to look more closely at these classes in the SDK to figure out the details:

  • ConfigurationDiagnosticsInfrastructureLogs
  • ConfigurationDirectories
  • ConfigurationLogs
  • ConfigurationWindowsEventLog

I’d be happy to dive into the details of using any of those classes…just post a comment!

Brian’s post includes the complete script.


The Windows Azure Team posted Roozz.com Powers Online Game and Application Rentals Worldwide with Windows Azure on 9/22/2011:

serioussamscreenshotGame hero Serious Sam is among the first ambassadors for Danish start-up Roozz.com and its software, which allows users to run games and applications in any PC browser.

Designed to enable businesses to avoid paying for expensive software they may only use occasionally, Roozz is implemented as a browser plugin with integrated payment and copyright protection.

Roozz runs on Windows Azure, which enables the company to:

  • easily adjust capacity up and down depending on the number of users;
  • operate smoothly 99.9 % of the time;
  • ensure that games and software applications run perfectly, no matter where the user is located;
  • save on hardware and server investments; and,
  • handle money transactions securely.

“Windows Azure is extremely stable and draws on six data centers and a content delivery network with 24 data centers worldwide. This means that the games and programs we provide run at the exact same speed whether you’re in Mexico City, Dublin or Singapore,” explains Roozz.com Sales and Marketing Manager Jesper Wendel Thomsen.

“In the first half of 2011, the Roozz plugin was downloaded 140,000 times and we can see from our statistics that the number of end users is growing rapidly month by month,” adds Thomsen.

Click here to learn more about the Roozz plugin. Click here to learn more about publishing your software on Roozz.


Marianne McGee described two health-industry case studies in her Cloud Rx whitepaper for InformationWeek::Analytics of 9/22/2011:

Joni Mitchell once complained, "I've looked at clouds from both sides now. ... I really don't know clouds at all." No doubt many healthcare IT executives and practitioners share the same frustration as they try to determine whether to move at least some of their applications to a cloud service.

Not every application is ripe for the cloud, but these two case studies offer some insights into what does work.

While an abstract discussion of the advantages and disadvantages of software as a service has value, taking a real-world look at how hospitals and practices have made the move is even more useful. With that notion in mind, here are two case studies.

Download

About the Author

Marianne Kolbasuk McGee has been reporting and writing about IT for more than 20 years. She joined InformationWeek in 1992 and covers a variety of issues, including IT management, careers, skill and salary trends and H-1B visas. McGee also closely follows health care IT issues, including the federal government’s stimulus spending programs for expanding the adoption of electronic medical records systems.


Avkash Chauhan described Whats new in Windows Azure SDK 1.5 - Using CSUPLOAD tool to upload service certificates for your Windows Azure Application in a 9/21/2011 post:

As you can see below, the certificate is already installed on my machine and I want to upload it as a service certificate to the Azure portal. Before the release of Windows Azure SDK 1.5 you had to upload the certificate through the portal; with the SDK 1.5 CSUPLOAD tool, you can upload the certificate directly from the command line.

The command line is as below:

>csupload add-servicecertificate

-Connection "SubscriptionID=<YOUR_SUBSCRIPTION_GUID>; CertificateThumbprint=<Management_Certificate_Thumbprint>"

-HostedServiceName "<Your_Hosted_Service_Name>"

-Thumbprint "<Service_Certificate_Thumbprint>"

Example:

Here is the certificate which I would like to upload in my service certificate section:

Here is what my Windows Azure service certificate list looks like before uploading:

Here is the actual command output:

C:\Program Files\Windows Azure SDK\v1.5\bin>csupload add-servicecertificate -Connection "SubscriptionID=<YOUR_SUBSCRIPTION_ID>;CertificateThumbprint=A77B40E35556DFDB09C3B246453A548B2D7B9444" -HostedServiceName "avkashchauhan" -Thumbprint "673eb3d86c3fb01bc58c06915c50cb01cb879a9b"


Windows(R) Azure(TM) Upload Tool version 1.5.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

arg[0]="add-servicecertificate"
arg[0]={ 97, 100, 100, 45, 115, 101, 114, 118, 105, 99, 101, 99, 101, 114, 116,
105, 102, 105, 99, 97, 116, 101 }
arg[1]="-Connection"
arg[1]={ 45, 67, 111, 110, 110, 101, 99, 116, 105, 111, 110 }
arg[2]="SubscriptionID=<SUBSCRIPTION_ID>;CertificateThumbprint=A77B40E35556DFDB09C3B246453A548B2D7B9444"
arg[3]="-HostedServiceName"
arg[3]={ 45, 72, 111, 115, 116, 101, 100, 83, 101, 114, 118, 105, 99, 101, 78, 9
7, 109, 101 }
arg[4]="avkashchauhan"
arg[4]={ 97, 118, 107, 97, 115, 104, 99, 104, 97, 117, 104, 97, 110 }
arg[5]="-Thumbprint"
arg[5]={ 45, 84, 104, 117, 109, 98, 112, 114, 105, 110, 116 }
arg[6]="673eb3d86c3fb01bc58c06915c50cb01cb879a9b"
arg[6]={ 54, 55, 51, 101, 98, 51, 100, 56, 54, 99, 51, 102, 98, 48, 49, 98, 99,
53, 56, 99, 48, 54, 57, 49, 53, 99, 53, 48, 99, 98, 48, 49, 99, 98, 56, 55, 57,
97, 57, 98 }

Uploading service certificate to 'avkashchauhan'.
Service certificate upload complete.
FriendlyName :
Thumbprint : 673EB3D86C3FB01BC58C06915C50CB01CB879A9B
Subject : CN=Avkash Azure Cert2048
IssuedBy : CN=Avkash Azure Cert2048
ValidFrom : 9/1/2011 12:00:00 AM
ValidTo : 12/31/2017 11:00:00 PM
HasPrivateKey : True

Here is the certificate uploaded in my Service Certificate section:

If you want to learn how to create a 2048-bit certificate with an exportable private key, visit the blog post below:

http://blogs.msdn.com/b/avkashchauhan/archive/2011/09/21/how-to-generate-2048-bit-certificate-with-makecert-exe.aspx

More info about CSUPLOAD Tool:

http://msdn.microsoft.com/en-us/library/gg466228.aspx
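If you're not sure which thumbprint values to plug into -Connection and -Thumbprint, a small C# sketch like the following (not part of the SDK, just a helper) lists what's in the CurrentUser\My certificate store:

using System;
using System.Security.Cryptography.X509Certificates;

class ListThumbprints
{
    static void Main()
    {
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);

        foreach (var certificate in store.Certificates)
        {
            // Thumbprint on the left, subject on the right.
            Console.WriteLine("{0}  {1}", certificate.Thumbprint, certificate.Subject);
        }

        store.Close();
    }
}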


Avkash Chauhan explained How to generate 2048 Bit Certificate with Makecert.exe? in a 9/21/2011 post:

The makecert.exe tool that comes with VS2010 can generate up to a 1024-bit certificate. To create a 2048-bit certificate you need makecert.exe from the Windows SDK 7.1. The details are below:

Step 1: Download Windows SDK 7.1 from the link below:

http://www.microsoft.com/download/en/details.aspx?id=8279

Step 2: Be sure that you have makecert.exe version 6.1.7600.16385 as below:

Step 3: Now open the Window SDK command prompt window as below:

Step 4: In the opened command window, type the command below:

C:\Windows\system32>makecert -r -pe -n "CN=Avkash Azure Cert2048" -a sha1 -ss My -len 2048 -sy 24 -b 09/01/2011 -e 01/01/2018

Succeeded

Step 5: Now launch certificate manager using certmgr.msc and verify the certificate as below:

Note: If you are generating a certificate for Windows Azure, please use the -pe option with makecert.exe so the private key is exportable. If the certificate's private key is not exportable, you cannot upload the certificate for your Windows Azure application.
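A quick way to double-check that point before you upload anything: exporting the certificate to PFX from code throws if the private key isn't exportable. This is only a hedged sketch; the subject name matches the makecert example above and the password is a throwaway:

using System;
using System.Security.Cryptography.X509Certificates;

class CheckExportablePrivateKey
{
    static void Main()
    {
        // makecert -ss My puts the certificate in the CurrentUser\My store.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);

        var matches = store.Certificates.Find(
            X509FindType.FindBySubjectDistinguishedName, "CN=Avkash Azure Cert2048", false);

        foreach (var certificate in matches)
        {
            try
            {
                // Throws a CryptographicException if the private key is missing
                // or was not created as exportable (-pe).
                certificate.Export(X509ContentType.Pfx, "throwaway-password");
                Console.WriteLine("{0}: private key present and exportable", certificate.Thumbprint);
            }
            catch (Exception ex)
            {
                Console.WriteLine("{0}: not exportable ({1})", certificate.Thumbprint, ex.Message);
            }
        }

        store.Close();
    }
}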


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Bob Baker described workarounds for Running LightSwitch 2-tier applications on Windows 8 in a 9/24/2011 post to the MicroApplications, Inc. Web Log:

I wanted to show a LightSwitch application running on Windows 8 at our last Orlando .Net Users Group meeting, but I didn’t get a chance to work out the kinks that Windows 8 presented in time. It being Saturday today, and the good football games not starting until right about now (3:30 PM EDT, or GMT-5), I took some time this afternoon and worked it out.

It’s rather simple actually. I had published the application from my Windows 7 workstation, copied the Publish folder over to the Windows 8 installation that lives on a bootable .vhd on my laptop (this is the approach I used), and followed the instructions in the install.htm file that’s included. I’m actually fibbing about completely following the instructions. Since I had not attempted to install LightSwitch on Windows 8, I skipped the last step.

I ran the database creation script from an elevated command prompt as instructed. To get to an elevated command prompt, you have to right-click the Command Prompt placed on your Windows 8 Metro Start Screen by Visual Studio 11 Developer Edition, and select Run As Administrator by clicking on the Advanced charm in the App Bar that appears (or do it manually if you did not install the full Visual Studio 11). I verified that the web.config in the Application Files folder was set up correctly. But I did NOT run the command line using the Security Admin tool that comes with LightSwitch (Microsoft.LightSwitch.SecurityAdmin.exe). Of course, the application timed out connecting to the database.

Being a bit of a hacker, I wondered (not out loud mind you) whether or not I could just copy the Tools folder where the LightSwitch Security Admin tool lives over to my Windows 8 installation and run it. Guess what? I could, you can, it runs, and my LightSwitch application started right up. I set up the application with Windows Authentication; if you use Forms authentication, you will need to supply a password. I had to put my full name and the path to the web.config in quotes.

Hope this helps. I’m sure the LightSwitch team is already working on making deployment onto Windows 8 and the new release of SQL Server Express much easier.


Beth Massi (@Beth Massi) explained how to have Fun with the Office Integration Pack Extension for LightSwitch in a 9/22/2011 post:

Last week Grid Logic released a FREE LightSwitch extension called the Office Integration Pack which has quickly risen to the second most popular LightSwitch extension on VS Gallery! It lets you populate documents and spreadsheets with data, create email and appointments with Outlook, import data from Excel, create PDFs, and a bunch of other stuff from your LightSwitch desktop applications. I’ve been known to do a bit of Office development in my day ;-) so I thought I’d check this extension out myself. In this post I’ll show you a couple tips for exporting data to Excel and Word that I learned while I was playing around.

Installing the Office Integration Pack

First thing you need to do is get the Office Integration Pack installed. You’ll also want to download the sample application and documentation. (BTW, the source code is also provided for free here!) You can download and install the Office Integration Pack directly from Visual Studio LightSwitch via the Extension Manager or you can download it manually from the Visual Studio Gallery.


Once you install the extension, restart Visual Studio. Then you will need to enable the extension on your LightSwitch project by opening the project properties, clicking the Extensions tab, and then checking the Office Integration Pack.


Now let’s explore some of the things this baby can do.

Export to Excel

LightSwitch has a really nice feature on data grids that allow you to export them to Excel:


This gives users a basic way of getting data out of the system to create reports or do further analysis on the data using Excel. However, you can’t call this feature from your own code. One of the great features of the Office Integration Pack is that it not only lets you call the Export from code, it also allows a bunch of customization. You can control exactly what fields are exported as well as specify what worksheet the data should be exported into.

For instance say I have a list of customers on my own search screen (like pictured above) and I want to provide my own export that only exports CompanyName, ContactName, ContactTitle and Phone fields. In the screen designer first add a button onto the Data Grid’s command bar, I’ll call it ExportToExcel.

image

In the Properties window you can then specify an image to display if you want. We can also turn off the default Excel export on the grid by selecting the Customers Data Grid and, in the Properties window, checking “Disable Export to Excel”.

image

Now we need to write some code to export the fields we want. Right-click on the Export to Excel button and select “Edit Execute Code”. We can use a couple of different OfficeIntegration.Excel.Export APIs to do what we want here. The way I usually learn about a new API is through IntelliSense, so if we start typing “OfficeIntegration (dot) Excel (dot)” you will see the list of available methods:

image

The Export method has four overloads. The first and simplest just takes the data collection and will export all the data and fields to a new workbook, similar to the built-in Excel export. The second overload lets us specify a particular workbook, worksheet and range. In our case we want to specify particular fields as well, and there are a couple of ways we can do that. The third and fourth overloads let us specify a ColumnNames parameter which can take two forms. One is just a simple List(Of String). Just fill the list with the field names you want to export.

Private Sub ExportToExcel_Execute()
    Dim ExcelFile = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments) & "\Customers.xlsx"

    If File.Exists(ExcelFile) Then
        Dim fields As New List(Of String) From
            {"CompanyName", "ContactName", "ContactTitle", "Phone"}

        OfficeIntegration.Excel.Export(Me.Customers, ExcelFile, "Sheet1", "A1", fields)
    End If
End Sub

Another way we can do this is by specifying a List(Of OfficeIntegration.ColumnMapping). The ColumnMapping class is used in many of the APIs, particularly the Import method, where you can specify both the column in the workbook and the property on the entity in order to map them. In the case of an Export, this isn’t necessary; all I need to do is specify the properties (fields) on the entity I want to export.

Now when we run the application and click our export button we will see only the fields we specified exported to Excel.

image

Export to Word

We can export data to Word as well. There are a couple of methods you can take advantage of here. One is called GenerateDocument, which lets you define a template of content controls into which the data will be exported. Content controls are a great way of capturing data inside Word documents. Let’s create a Word document that reports all of a customer’s orders. First I’ll create a Details Screen for my customer and select to include the Customer Orders as well. This will create a one-to-many screen that has the customer detail and a grid of their orders below.

image

Next I’ll add a button to the screen, this time in the screen command bar at the top, called “Generate Document”.

image

Next we need to create the template in Word and add the content controls where we want them; these will be populated with data from our customer entity. First enable the Developer tab in Word (File –> Options –> Customize Ribbon, then check to enable the “Developer” tab). Lay out simple text controls around your template and format it how you want. Then click Properties to name the controls. You can name the content controls anything you want. Later we will specify the ColumnMapping between the Title of the content controls and the customer properties.

image

We also want to add a table of related orders to this document. In order to create tables, you create a table in Word and then bookmark it. You can optionally create the column headers manually or you can have the Office Integration Pack output them for you. I’ll create a nicely formatted table with two rows for this one, with my own column headers in the first row. Then I’ll bookmark it “OrderTable”.

image

Finally we need to write some code to first call GenerateDocument to populate our content controls and then make a call to ExportEntityCollection to export the collection of related Orders into the bookmarked table in Word. I’ll also generate a PDF from this by calling the SaveAsPDF method. Back on the screen, right-click on the GenerateDocument command, select “Edit Execute Code” and write the following:

Private Sub GenerateDocument_Execute()

    Dim MyDocs = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments)
    Dim WordFile = MyDocs & "\Customer.docx"

    If File.Exists(WordFile) Then

        'Map the content control tag names in the word document to the entity field names
        Dim custFields As New List(Of OfficeIntegration.ColumnMapping)
        custFields.Add(New OfficeIntegration.ColumnMapping("ContactName", "ContactName"))
        custFields.Add(New OfficeIntegration.ColumnMapping("CompanyName", "CompanyName"))
        custFields.Add(New OfficeIntegration.ColumnMapping("Phone", "Phone"))

        Dim doc As Object = OfficeIntegration.Word.GenerateDocument(WordFile,
                                                                    Me.Customer, custFields)

        'Export specific fields to the bookmarked "OrderTable" in Word
        Dim orderFields As New List(Of String) From
            {"ShipName", "OrderDate", "ShippedDate"}

        OfficeIntegration.Word.ExportEntityCollection(doc, "OrderTable", 2, False,
                                                      Me.Customer.Orders, orderFields)

        OfficeIntegration.Word.SaveAsPDF(doc, MyDocs & "\Customer.pdf", True)
    End If
End Sub

When you run it you’ll end up with a Word document and a PDF displayed on the screen with our data formatted perfectly!

image

Note that a lot of these methods return the document (or workbook) automation object so you can make additional late-bound COM calls to do anything else you want to automate with Office. If you go that route here are the Excel and Word COM developer references that you’ll want handy ;-).

Have fun using the Office Integration Pack. Thank you Grid Logic!


Michael Washington (@ADefWebserver) quoted Microsoft’s Steve Hoag in a 9/21/2011 post to the OpenLightGroup.net blog:

In this thread in the LightSwitch Forums:

http://social.msdn.microsoft.com/Forums/en-US/lightswitch/thread/af1690b4-09d7-4c1e-98b9-d2a1b1e89e9a

image

Read the full thread for additional context.


Rowan Miller of the ADO.NET Entity Framework Team published the second of two posts about Code First Migrations: Alpha 3 Released on 9/21/2011:

A couple of weeks ago we released Alpha 2 of Code First Migrations. Today we are happy to make Alpha 3 available to you.

The overall feedback we got on Alpha 2 was that you like the direction we are headed. We heard that there is definitely a use case for automatic migrations, and while not everyone will use them, we should continue to invest in them. You are asking us to look at the story for data migration that doesn’t involve dropping down to SQL. Alpha 2 also had some commonly encountered bugs that were annoying and that we needed to fix.

Alpha 3 is still primarily focused on the developer experience inside of Visual Studio. This release adds most of the development-time features we plan to include in the first RTM. Your feedback will guide any changes we make to the overall design. Apart from responding to your feedback, our team is starting to focus on improving quality. We are also working on completing the deployment and team-build stories, including a command line tool and an MSDeploy provider.

What Changed

Alpha 3 is focused on adding the remaining features that you have been asking for around development time migrations. The primary changes in Alpha 3 are:

  • VB.NET migration generation is now available.
  • Upgrade/Downgrade to any migration is now available via the –TargetMigration switch.
  • Seed data can now be added by overriding the Seed method of the Settings class. Data is expressed in terms of the current model and the Seed method is called after upgrading to the latest migration (see the sketch after this list).
  • We removed TargetDatabase from the Settings class. Instead you can use the –TargetDatabase switch to specify the name of a connection string from your App/Web.config file.
  • You can now start using Code First Migrations against an existing database that was created by Code First. To do this, issue an Update-Database command when your model is the same as the model that created the database. To use this feature the EdmMetadata table must be present in the database. The original creation of the database and tables will be counted as an automatic migration; if you want a code-based migration for the initial creation of the database and tables, you must drop the database and begin using migrations against an empty or non-existent database.
  • Improved error logging when an exception is encountered.
  • The bugs that required SQL Compact and SQL Express to be available are now resolved.
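
For context, here is a minimal sketch of what seeding might look like (this is my illustration, not code from the Migrations team; it assumes the Seed override receives your typed context, which is the shape the commented template in the generated Settings class suggests, and it borrows the Blog model from the walkthrough post that follows):

using System.Data.Entity.Migrations;
using System.Data.Entity.Migrations.Providers;
using System.Data.SqlClient;
using System.Linq;

namespace Alpha3Demo.Migrations
{
    public class Settings : DbMigrationContext<BlogContext>
    {
        public Settings()
        {
            AutomaticMigrationsEnabled = false;
            SetCodeGenerator<CSharpMigrationCodeGenerator>();
            AddSqlGenerator<SqlConnection, SqlServerMigrationSqlGenerator>();
        }

        // Called after the database has been upgraded to the latest migration.
        // Written defensively so repeated runs don't create duplicate rows.
        protected override void Seed(BlogContext context)
        {
            if (!context.Blogs.Any())
            {
                context.Blogs.Add(new Blog { Name = "Seeded Blog" });
                context.SaveChanges();
            }
        }
    }
}
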
Known Issues

Some known issues, and ‘yet to be implemented’ features include:

  • Code-based migrations and databases generated with Alpha 2 cannot be used with Alpha 3. Because Alpha 2 was such an early release we needed to make some changes to the table we use to store migrations and the way we generate and identify code-based migrations. If you were using Alpha 2 you will need to regenerate migrations and run them to create your database. We realize this is painful but it is one of the trade-offs associated with getting previews into your hands early in the development process. We are making an effort to reduce such changes in future previews.
  • If you are updating the EntityFramework.Migrations package you will need to close and re-open Visual Studio after updating. This is required to reload the updated command assemblies.
  • No outside-of-Visual-Studio experience. Alpha 3 only includes the Visual Studio integrated experience. We also plan to deliver a command line tool and an MSDeploy provider for Code First Migrations.
Feedback

We really want your feedback on Code First Migrations. Is the solution we are delivering what you want? Are there features we should add? Please comment on this post with any feedback you might have.

Getting Started

There are two walkthroughs for Alpha 3: one focuses on the no-magic workflow that uses a code-based migration for every change; the other looks at using automatic migrations to avoid having lots of code in your project for simple changes.

Code First Migrations Alpha 3 is available via NuGet as the EntityFramework.Migrations package.

Prerequisites & Incompatibilities

Migrations is dependent on EF 4.1 Update 1; this updated release of EF 4.1 will be installed automatically when you install the EntityFramework.Migrations package.

Important: If you have previously run the EF 4.1 stand alone installer you will need to upgrade or remove the installation before using migrations. This is required because the installer adds the EF 4.1 assembly to the Global Assembly Cache (GAC) causing the original RTM version to be used at runtime rather than Update 1.

This release is incompatible with “Microsoft Data Services, Entity Framework, and SQL Server Tools for Data Framework June 2011 CTP”.

Support

This is a preview of features that will be available in future releases and is designed to allow you to provide feedback on the design of these features. It is not intended or licensed for use in production. If you need assistance we have an Entity Framework Pre-Release Forum.



Rowan Miller of the ADO.NET Entity Framework Team published the first of two posts about Code First Migrations: Alpha 3 ‘No-Magic’ Walkthrough on 9/21/2011:

We have released the third preview of our migrations story for Code First development; Code First Migrations Alpha 3. This release includes a preview of the developer experience for incrementally evolving a database as your Code First model evolves over time.

This post will provide an overview of the functionality that is available inside of Visual Studio for interacting with migrations. We will focus on the ‘no-magic’ workflow for using migrations. In this workflow each change is written out to a code-based migration that resides in your project. There is a separate Code First Migrations: Alpha 3 ‘With-Magic’ Walkthrough that shows how this same set of changes can be applied by making use of automatic migrations.

This post assumes you have a basic understanding of the Code First functionality that was included in EF 4.1; if you are not familiar with Code First, please complete the Code First Walkthrough first.

Building an Initial Model

Before we start using migrations we need a project and a Code First model to work with. For this walkthrough we are going to use the canonical Blog and Post model.

  1. Create a new ‘Alpha3Demo’ Console application
  2. Add the EntityFramework NuGet package to the project
    • Tools –> Library Package Manager –> Package Manager Console
    • Run the ‘Install-Package EntityFramework’ command
  3. Add a Model.cs class with the code shown below. This code defines a single Blog class that makes up our domain model and a BlogContext class that is our EF Code First context.

    Note that we are removing the IncludeMetadataConvention to get rid of that EdmMetadata table that Code First adds to our database. The EdmMetadata table is used by Code First to check if the current model is compatible with the database, which is redundant now that we have the ability to migrate our schema. It isn’t mandatory to remove this convention when using migrations, but one less magic table in our database is a good thing, right?
    using System.Data.Entity;
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;
    using System.Data.Entity.Infrastructure;
    
    namespace Alpha3Demo
    {
        public class BlogContext : DbContext
        {
            public DbSet<Blog> Blogs { get; set; }
    
            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                modelBuilder.Conventions.Remove<IncludeMetadataConvention>();
            }
        }
    
        public class Blog
        {
            public int BlogId { get; set; }
            public string Name { get; set; }
        }
    }
Installing Migrations

Now that we have a Code First model let’s get Code First Migrations and configure it to work with our context.

  1. Add the EntityFramework.Migrations NuGet package to the project
    • Run the ‘Install-Package EntityFramework.Migrations’ command in Package Manager Console
  2. The EntityFramework.Migrations package has added a Migrations folder to our project. At the moment this folder just contains a single Settings class, which has also been opened for you to edit. This class allows you to configure how migrations behave for your context. The Settings class also exposes the provider model for code generation and SQL generation. We’ll just edit the Settings class to specify our BlogContext.
    using System.Data.Entity.Migrations;
    using System.Data.Entity.Migrations.Providers;
    using System.Data.SqlClient;
    
    namespace Alpha3Demo.Migrations
    {
        public class Settings : DbMigrationContext<BlogContext>
        {
            public Settings()
            {
                AutomaticMigrationsEnabled = false;
                SetCodeGenerator<CSharpMigrationCodeGenerator>();
                AddSqlGenerator<SqlConnection, SqlServerMigrationSqlGenerator>();
    
                // Uncomment the following line if you are using SQL Server Compact 
                // SQL Server Compact is available as the SqlServerCompact NuGet package
                // AddSqlGenerator<System.Data.SqlServerCe.SqlCeConnection, SqlCeMigrationSqlGenerator>();
    
                // Seed data: 
                //   Override the Seed method in this class to add seed data.
                //    - The Seed method will be called after migrating to the latest version.
                //    - The method should be written defensively in order that duplicate data is not created. E.g:
                //
                //        if (!context.Countries.Any())
                //        {
                //            context.Countries.Add(new Country { Name = "Australia" });
                //            context.Countries.Add(new Country { Name = "New Zealand" });
                //        }
                //
            }
        }
    }
Our First Migration

Code First Migrations has two commands that you are going to become familiar with. Add-Migration will scaffold the next migration based on changes you have made to your model. Update-Database will apply any pending changes to the database.

  1. We haven’t generated any migrations yet, so this will be our initial migration that creates the first set of tables (in our case that’s just the Blogs table). We can call the Add-Migration command and Code First Migrations will scaffold a migration for us with its best guess at what we should do to bring the database up to date with the current model. Once it has calculated what needs to change in the database, Code First Migrations will use the CSharpMigrationCodeGenerator that was configured in our Settings class to create the migration.
    The Add-Migration command allows us to give these migrations a name; let’s just call ours ‘MyFirstMigration’.
    • Run the ‘Add-Migration MyFirstMigration’ command in Package Manager Console
  2. In the Migrations folder we now have a new MyFirstMigration migration. The migration is prefixed with a timestamp to help with ordering.
    namespace Alpha3Demo.Migrations
    {
        using System.Data.Entity.Migrations;
        
        public partial class MyFirstMigration : DbMigration
        {
            public override void Up()
            {
                CreateTable(
                    "Blogs",
                    c => new
                        {
                            BlogId = c.Int(nullable: false, identity: true),
                            Name = c.String(),
                        })
                    .PrimaryKey(t => t.BlogId);
                
            }
            
            public override void Down()
            {
                DropTable("Blogs");
            }
        }
    }
  3. We could now edit or add to this migration but everything looks pretty good. Let’s use Update-Database to apply this migration to the database.
    • Run the ‘Update-Database’ command in Package Manager Console
  4. Code First Migrations has now created an Alpha3Demo.BlogContext database on our local SQL Express instance. We could now write code that uses our BlogContext to perform data access against this database.
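    For example, a minimal sketch of data-access code against this database might look like the following (the blog name here is made up; this just inserts a row and reads it back to confirm the database is wired up):

    using System;
    using System.Linq;

    namespace Alpha3Demo
    {
        class Program
        {
            static void Main(string[] args)
            {
                using (var db = new BlogContext())
                {
                    // Insert a row into the Blogs table created by MyFirstMigration.
                    db.Blogs.Add(new Blog { Name = "Another Blog" });
                    db.SaveChanges();

                    // Read the rows back to confirm everything works end to end.
                    foreach (var blog in db.Blogs.OrderBy(b => b.Name))
                    {
                        Console.WriteLine(blog.Name);
                    }
                }
            }
        }
    }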

 

Alpha3DemoDatabase
Customizing Migrations

So far we’ve generated and run a migration without making any changes. Now let’s look at editing the code that gets generated by default.

  1. It’s time to make some more changes to our model, let’s introduce a Blog.Rating property and a new Post class.
    public class Blog
    {
        public int BlogId { get; set; }
        public string Name { get; set; }     
        public int Rating { get; set; }
        public List<Post> Posts { get; set; }
    }
    
    public class Post
    {
        public int PostId { get; set; }
        [MaxLength(200)]
        public string Title { get; set; }
        public string Content { get; set; }
    
        public int BlogId { get; set; }
        public Blog Blog { get; set; }
    }  
  2. Let’s use the Add-Migration command to let Code First Migrations scaffold its best guess at the migration for us. We’re going to call this migration ‘MySecondSetOfChanges’.
    • Run the ‘Add-Migration MySecondSetOfChanges’ command in Package Manager Console
  3. Code First Migrations did a pretty good job of scaffolding these changes, but there are some things we might want to change:
    • First up, let’s add a unique index to the Posts.Title column.
    • We’re also adding a non-nullable Blogs.Rating column; if there is any existing data in the table, it will get assigned the CLR default of the data type for the new column (Rating is an integer, so that would be 0). But we want to specify a default value of 3 so that existing rows in the Blogs table will start with a decent rating.
      These changes to the scaffolded migration are highlighted below:
      namespace Alpha3Demo.Migrations
      {
          using System.Data.Entity.Migrations;
          
          public partial class MySecondSetOfChanges : DbMigration
          {
              public override void Up()
              {
                  CreateTable(
                      "Posts",
                      c => new
                          {
                              PostId = c.Int(nullable: false, identity: true),
                              Title = c.String(maxLength: 200),
                              Content = c.String(),
                              BlogId = c.Int(nullable: false),
                          })
                      .PrimaryKey(t => t.PostId)
                      .ForeignKey("Blogs", t => t.BlogId)
                      .Index(p => p.Title, unique: true);
                  
                  AddColumn("Blogs", "Rating", c => c.Int(nullable: false, defaultValue: 3));
              }
              
              public override void Down()
              {
                  DropForeignKey("Posts", "BlogId", "Blogs", "BlogId");
                  DropColumn("Blogs", "Rating");
                  DropTable("Posts");
              }
          }
      }
  4. Our edited migration is looking pretty good, so let’s use Update-Database to bring the database up-to-date. This time let’s specify the –Verbose flag so that you can see the SQL that Code First Migrations is running.
    • Run the ‘Update-Database –Verbose’ command in Package Manager Console
    Data Motion / Custom SQL

    So far we have just looked at migration operations that don’t change or move any data, now let’s look at something that needs to move some data around. There is no native support for data motion in Alpha 3, but we can run some arbitrary SQL commands at any point in our script.

    1. Let’s add a Post.Abstract property to our model. Later, we’re going to pre-populate the Abstract for existing posts using some text from the start of the Content column.
      public class Post
      {
          public int PostId { get; set; }
          [MaxLength(200)]
          public string Title { get; set; }
          public string Content { get; set; }
          public string Abstract { get; set; }     
      
          public int BlogId { get; set; }
          public Blog Blog { get; set; }
      }
    2. Let’s use the Add-Migration command to let Code First Migrations scaffold its best guess at the migration for us. We’re going to call this migration ‘AddPostAbstract’.
      • Run the ‘Add-Migration AddPostAbstract’ command in Package Manager Console
    3. The generated migration takes care of the schema changes but we also want to pre-populate the Abstract column using the first 100 characters of content for each post. We can do this by dropping down to SQL and running an UPDATE statement after the column is added.
      namespace Alpha3Demo.Migrations
      {
          using System.Data.Entity.Migrations;
          
          public partial class AddPostAbstract : DbMigration
          {
              public override void Up()
              {
                  AddColumn("Posts", "Abstract", c => c.String());
                  
                  Sql("UPDATE dbo.Posts SET Abstract = LEFT(Content, 100) WHERE Abstract IS NULL");
              }
              
              public override void Down()
              {
                  DropColumn("Posts", "Abstract");
              }
          }
      }
    4. Our edited migration looks good, so let’s use Update-Database to bring the database up-to-date. We’ll specify the –Verbose flag so that we can see the SQL being run against the database.

      • Run the ‘Update-Database –Verbose’ command in Package Manager Console
    Migrate to a Specific Version (Including Downgrade)

    So far we have always upgraded to the latest migration, but there may be times when you want to upgrade or downgrade to a specific migration.

    1. Let’s say we want to migrate our database to the state it was in after running our ‘MyFirstMigration’ migration. We can use the –TargetMigration switch to downgrade to this migration.
      • Run the ‘Update-Database –TargetMigration:"MyFirstMigration"’ command in Package Manager Console

    This command will run the Down script for our ‘AddPostAbstract’ and ‘MySecondSetOfChanges’ migrations. If you want to roll all the way back to an empty database, you can use the Update-Database –TargetMigration:"0" command.

    Getting a SQL Script

    Now that we have performed a few iterations on our local database let’s look at applying those same changes to another database.

    If another developer wants these changes on their machine they can just sync once we check our changes into source control. Once they have our new migrations they can just run the Update-Database command to have the changes applied locally.

    However if we want to push these changes out to a test server, and eventually production, we probably want a SQL script we can hand off to our DBA. In this preview we need to generate a script by pointing to a database to migrate, but in the future you will be able to generate a script between two named versions without pointing to a database.

    1. We’re just going to simulate deploying to a second database on the local SQL Express instance. Add an App.config file to your project and include a ‘MySecondDatabase’ connection string.
      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <connectionStrings>
          <add name="MySecondDatabase"
               providerName="System.Data.SqlClient"
               connectionString="Server=.\SQLEXPRESS;Database=AnotherDatabase;Trusted_Connection=True;"/>
        </connectionStrings>
      </configuration>
    2. Now let’s run the Update-Database command but this time we’ll specify the –TargetDatabase flag to use the connection string we just added to the configuration file. We’ll also specify the –Script flag so that changes are written to a script rather than applied.
      • Run the ‘Update-Database –TargetDatabase:"MySecondDatabase" –Script’ command in Package Manager Console
    3. Code First Migrations will run the migration pipeline but instead of actually applying the changes it will write them out to a .sql file for you. Once the script is generated, it is opened for you in Visual Studio, ready for you to view or save.
    Summary

    In this walkthrough you saw how to scaffold, edit and run code-based migrations to upgrade and downgrade your database. You also saw how to get a SQL script that represents the pending changes to a database.

    As always, we really want your feedback on what we have so far, so please try it out and let us know what you like and what needs improving.


    <Return to section navigation list>

    Windows Azure Infrastructure and DevOps

    Chris Hoff (@Beaker) posted Flying Cars & Why The Hypervisor Is A Ride-On Lawnmower In Comparison on 9/23/2011:

    I wrote a piece a while ago (in 2009) titled “Virtual Machines Are Part Of the Problem, Not the Solution…” in which I described the fact that hypervisors, virtualization and the packaging that supports them — Virtual Machines (VMs) — were actually kludges.

    Specifically, VMs still contain the bloat (nee: cancer) that are operating systems and carry forward all of the issues and complexity (albeit now with more abstraction cowbell) that we already suffer. Yes, it brings a lot of GOOD stuff, too, but tolerate the analog for a minute, m’kay.

    Moreover, the move in operational models such as Cloud Computing (leveraging the virtualization theme) and the up-stack crawl from IaaS to PaaS (covered also in a blog I wrote titled: Silent Lucidity: IaaS – Already A Dinosaur?) seems to indicate a general trending toward a reduction in the number of layers in the overall compute stack.

    Something I saw this morning reminded me of this and its relation to how the evolution and integration of various functions — such as virtualization and security — directly into CPUs themselves are going to dramatically disrupt how we perceive and value “virtualization” and “cloud” in the long run.

    I’m not going to go into much detail because there’s a metric crapload of NDA type stuff associated with the details, but I offer you this as something you may have already thought about and the industry is gingerly crawling toward across multiple platforms. You’ll have to divine and associate the rest:

    Think “Microkernels”

    …and in the immortal words of Forrest Gump “That’s all I’m gonna say ’bout that.”

    /Hoff

    * Ray DePena humorously quipped on Twitter that “…the flying car never materialized,” to which I retorted “Incorrect. It has just not been mass produced…” I believe this progression will — and must — materialize.

    No significant articles today.


    David Linthicum (@DavidLinthicum) recommended “Before you move to the cloud, take some time to consider your IT architecture” in a deck for his The secret to cloud success: Get a grasp on SOA post of 9/22/2011 to InfoWorld’s Cloud Computing blog:

    Those IT organizations that move to cloud computing are moving to SOA (service-oriented architecture), whether they understand it or not. Hear me out: Private and public clouds often rely on APIs for their functionality, which are typically Web services that can be combined and recombined into solutions. The result: SOA, at its essence.

    The problem is that many of those who define and implement cloud computing don't have a good grasp of SOA. Although they are building a SOA by default, they have no handle on the proper steps and the interworkings of all the pieces. They end up with a Franken-SOA, where some aspects of the cloud solution are better thought out than others. …

    What's a Franken-SOA? It's a bunch of cloud services that become parts of core applications or processes, mostly on-premise. These services provide core functionality, including storage and compute features, that are used in a composite application or perhaps a composite process. However, they're used without a good architectural structure and become both difficult to change and difficult to manage.

    In Franken-SOAs, there is no governance, no identity management, no service management, and no service discovery. It's like driving an Indy car without a steering wheel. It's very powerful, but you will hit the wall -- quickly.

    The tragedy of the situation is that cloud-driven Franken-SOAs can be avoided with some planning and architectural forethought. But most of those who define the use of clouds these days are more about speed to deployment than thinking through the architecture. Indeed, many consider cloud computing to be a replacement for SOA. They don't grasp the value of SOA -- or any architecture and planning discipline, for that matter.

    I suspect that Franken-SOAs will continue to walk the earth. Hopefully, the angry villagers with torches and pitchforks will drive them out at some point.


    <Return to section navigation list>

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

    Mathew Weinberger (@M_Wein) asked New HP CEO Meg Whitman: Will eBay Experience Help HP Cloud Push? in a 9/23/2011 post:

    Now that HP has named Meg Whitman president and CEO, Talkin’ Cloud wonders: Can Whitman leverage her eBay experience to get Hewlett-Packard’s cloud services strategy in order? Under former HP CEO Leo Apotheker, we were underwhelmed by HP’s cloud efforts. In early 2011, Apotheker predicted that HP would compete against Apple iTunes, Amazon Web Services and other big commercial cloud platforms. But we never really got a feel for what would make HP “unique” in the cloud market.

    Perhaps Whitman can find the answer — but it’s going to require some work. For starters, HP has suffered from numerous media leaks and strategic changes in 2011. Also, HP Chairman Ray Lane has been busy defending how HP’s board has managed its own responsibilities over the past year.

    Right now, HP’s cloud strategy is vastly focused on infrastructure and data center hardware. Instead of developing its forthcoming public cloud platform in-house, HP is building it on top of the OpenStack platform. That’s not a bad play, but it speaks to how HP is largely attempting to leverage its hardware expertise in the cloud market and leave development of the major leaps in software and services to outside parties.

    But over her ten years at eBay, Whitman grew the company from 30 employees and $4 million in revenue to 15,000 employees and $8 billion. And that was entirely built on top of eBay’s web commerce and SaaS expertise. While the old eBay-Skype combo didn’t really end up paying off for eBay, it definitely speaks to Whitman’s propensity for what Google might call a 100% web approach.

    So this is pure speculation, but I’d not be surprised at all if Whitman refocuses HP’s cloud efforts to play to her strengths: namely web-based applications and social portals. And rest assured, we’ll continue to watch HP’s cloud moves closely.


    Matt didn’t mention that eBay was one of the early implementers of WAPA (during Whitman’s watch) and that HP announced its intention to do so during Microsoft’s Worldwide Partner Conference in 2010.


    <Return to section navigation list>

    Cloud Security and Governance

    Bill Kleyman explained Data security in the cloud: tactics and practices in a 9/22/2011 post to the SearchCloudComputing.com blog:

    Data security is a concern for any enterprise, and cloud computing often can magnify security anxieties. Adopting a few ground rules will help protect users, their data and your overall cloud investment.

    The list of security concerns with cloud computing may seem lengthy. In reality, though, cloud security tactics fall into two main categories: partner-based security (that is, security for Software as a Service, Platform as a Service or Infrastructure as a Service models) and end user-based, or client-based, security. Here are a few guidelines for securing a private or public cloud.

    Strategically plan your cloud security. Every environment is unique. Give careful consideration to how corporate workloads should be delivered to end users. Placing security at the forefront during the initial planning phase creates a solid foundation and allows compliance-conscious organizations to create a resilient and audit-ready cloud infrastructure.

    Pick your cloud vendor wisely. According to the Cloud Security Alliance, data loss and leakage are the top security threats of cloud computing. It's crucial to choose a cloud partner that can protect your enterprise's sensitive data. When evaluating a cloud partner for corporate IT services, make sure the vendor has experience in both IT and security services. Verify that cloud-ready risk mitigation is part of the provider's common security practice. And evaluate only cloud providers that have a proven track record integrating IT, security and network services and can provide strategic service-performance assurances.

    Formulate an identity management system. Every enterprise environment will likely have some sort of identity management system that controls user access to corporate data and computing resources. When moving to the public cloud or building a private cloud, identity federation should be a major consideration.

    A cloud provider must be willing to integrate an existing identity management system into its infrastructure using identity federation or single sign-on (SSO), or provide its own identity management system. Without this, environments create identity pools in which end users must use multiple sets of credentials to access common workloads.

    Protect corporate data in the cloud. In a secure IT organization, data from one end user is properly segmented from that of another user. In other words, data at rest must be stored securely and data in motion must move securely from one location to another without interruption. Reputable cloud partners can prevent data leaks and ensure that unauthorized third parties cannot access data. It's important to clearly define roles and responsibilities to ensure that users -- even privileged users -- cannot circumvent auditing, monitoring and testing, unless otherwise authorized.

    Develop an active monitoring system. Enterprises must continuously monitor data in the cloud. Performance bottlenecks, system instabilities or other issues must be caught early to avoid any outages in services. Failure to constantly monitor the health of a cloud environment will result in poor performance, possible data leaks and angry end users. Organizations that are cloud-ready must plan which monitoring tools to use and how often they must track and monitor data.

    For example, a company pushing a virtual desktop to the cloud may be interested in the following metrics:

    • SAN use
    • WAN operation
    • Networking issues or bottlenecks
    • Log-in data, i.e., failed attempts, lockout information
    • Gateway information
      • Where are users coming from? Is there suspicious traffic coming into the private cloud?
      • How are IP addresses being used? Is internal gateway routing functioning properly?

    After that, you can implement manual or automated procedures to respond to any events or outages that occur. It's very important to understand the value behind actively monitoring a cloud solution. By constantly keeping an eye on the cloud environment, IT administrators can proactively resolve issues before an end-user can notice them.

    Establish cloud performance metrics and test regularly. When researching a cloud service provider -- for public cloud or private cloud -- check that the vendor presents a solid service-level agreement that includes metrics like availability, notification of a breach, outage notification, service restoration, average resolution times and so on. Regular proactive testing will remove a great deal of security risks or potential for data leaks.

    Even though your cloud provider conducts testing, it's imperative to also have internal test procedures in place. IT managers know the environment -- and its end-users' demands -- best. Inconsistencies or irregularities in how cloud-based workloads are being used can lead to security breaches or data leaks.

    Next steps: Identity federation in the cloud
    Thorough security tactics must be in place, starting from the host level and continuing all the way through the cloud infrastructure and to the end user. There are several tools on the market to help enterprises secure an investment in cloud computing.

    Identity federation, for example, helps take credential management to the next level by securing a cloud infrastructure. Cloud computing offers great benefits to those environments prepared to make the investment, as long as they make wise and well-researched decisions when evaluating cloud security options.


    Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management.

    Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


    <Return to section navigation list>

    Cloud Computing Events

    James Governor (@monkchips) reported on 9/23/2011 Monktoberfest is coming. Where Tech Meets Social Meets… Beer. In New England on 10/6/2011:

    Realised today that while I have tweeted about it, I haven’t yet written a post about an exciting event we’re running in a couple of weeks, on October 6th.

    Basically we wanted to double down on the intersection of social and tech (think Github). And our love of amazing beer of course (fancy a visit to one of the best brew pubs in the world for the evening meal?).

    The Speakers are impressive – folks like Mike Olson (CEO, Cloudera), Zack Urlocker (COO, ZenDesk), Matt LeMay (Director of Platform, Bit.ly), Greg Avola (Co-Founder / Lead Developer, Untappd), Theo Schlossnagle (CEO, Omniti), Donnie Berkholz (Sr. Developer, Gentoo Linux) and Steve Citron-Pousty (Technology Evangelist at deCarta). But the delegates are just as cool, and the hallway conversation promises to be rich and engaging.

    There are a few tickets left, so you should sign up here.


    Eric Nelson (@ericnel) listed Windows Azure sessions from the BUILD conference in a 9/23/2011 post:

    Although the BUILD conference was primarily about Windows 8, there were still a lot of great sessions on Windows Azure. Now… how do we collectively find the time to watch them :-).

    Overview

    image

    Windows 8 and Windows Azure

    Data

    Devices

    Operations

    Development/Other

    Related Links:

    I also updated my post with supplemental posts by session presenters, such as Clemens Vasters and Michael Washam.


    Jim O’Neil reported on 9/22/2011 that Code Camp NYC–October 1, 2011 will be held at the Pace University campus:

    For those of you on the southern edge of Chris’ and my stomping grounds (or those of you looking for a good excuse to head to the big city), here’s a great event to put on your calendar.

    CodeCamp NYC

    Like all the code camps you’ve attended, this is an event “by the community, for the community.” This sixth edition has moved from the Microsoft office to the campus of Pace University in order to support an expected 700 attendees!

    There’s a minimal cost to attend and the chance to hear experts in their fields as well as network with like-minded technologists is, well, priceless. Check out the current session list (with more to come!) and invest in yourself!

    Register before next Monday, September 26th, to guarantee your spot!


    Brian Hitney reported Azure and Phone … Better Together on 9/21/2011:

    We had an excellent time presenting today’s Windows Phone Camp in Charlotte. Thank you to everyone who attended. Here are some resources and major points of today’s “To the cloud…” session.

    First, here is the slide deck for the presentation.

    To The Cloud...

    image

    Next, download the Windows Azure Toolkit for Windows Phone. This contains both the sending notifications sample, and the Babelcam application. Note that there are quite a few setup steps – using the Web Platform Installer is a great way to make all of this easier.

    The key takeaway that I really wanted to convey: while the cloud is most often demonstrated with massive-scale scenarios, it’s also incredibly efficient at micro scale. The first scenario we looked at was using Azure Blob Storage as a simple (yet robust) way to host assets. Think of Blob Storage as a scalable file system with optional built-in CDN support. Regardless of where your applications are hosted (shared host, dedicated hosting, or your own datacenter), and regardless of the type of application (client, phone, web, etc.), the CDN offers a tremendously valuable way to distribute those resources.
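
    To make that concrete, here is a minimal sketch of pushing one asset into a public container (my own illustration, written against the 2011-era Microsoft.WindowsAzure.StorageClient library; the account details, container name and file paths are made up):

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class AssetUploader
    {
        static void Main()
        {
            // Hypothetical storage account; substitute your own connection string.
            var account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

            var client = account.CreateCloudBlobClient();
            var container = client.GetContainerReference("assets");
            container.CreateIfNotExist();

            // Make blobs publicly readable so browsers (or the CDN, once enabled
            // on the storage account) can fetch them anonymously.
            container.SetPermissions(new BlobContainerPermissions
            {
                PublicAccess = BlobContainerPublicAccessType.Blob
            });

            var blob = container.GetBlobReference("scripts/app.js");
            blob.Properties.ContentType = "application/x-javascript";
            blob.UploadFile(@"C:\site\scripts\app.js");
        }
    }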

    For MSDN subscribers, you already have access, so there’s no excuse not to use this benefit. But even if you had to go out of pocket, hosting assets in Azure is $0.15/GB per month, plus $0.01 per 10,000 transactions, plus $0.15/GB outbound bandwidth (inbound is free). For small applications, it’s almost free. Obviously you need to do the math for your app, but consider hosting 200 MB in assets (images, JS files, XAPs, etc.) with a million transactions a month and several GB of data transfers: it’s very economical, at a cost of a few dollars a month.
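
    As a rough back-of-the-envelope check (the rates are the 2011 prices quoted above; the usage numbers are hypothetical):

    class AssetCostEstimate
    {
        static void Main()
        {
            // Hypothetical monthly usage for a small app's assets.
            double storageGB = 0.2;          // 200 MB of images, JS files, XAPs, etc.
            double transactions = 1000000;   // one million storage transactions
            double outboundGB = 5;           // several GB of downloads (inbound is free)

            double storageCost = storageGB * 0.15;                   // $0.15 per GB-month  = $0.03
            double transactionCost = (transactions / 10000) * 0.01;  // $0.01 per 10,000    = $1.00
            double bandwidthCost = outboundGB * 0.15;                // $0.15 per GB out    = $0.75

            // 0.03 + 1.00 + 0.75 = $1.78 per month at these rates.
            double total = storageCost + transactionCost + bandwidthCost;
            System.Console.WriteLine("{0:C} per month", total);
        }
    }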

    In the second demo, we looked at using Azure Queues to enhance the push notification service on the phone. The idea is that we’ll queue failed notifications and retry them for some specified period of time. For the demo, I only modified the raw notifications. In PushNotificationsController.cs (in the toolkit demo above), I modified SendMicrosoftRaw slightly:

    [HttpPost]
    public ActionResult SendMicrosoftRaw(string userId, string message)
    {
        if (string.IsNullOrWhiteSpace(message))
        {
            return this.Json("The notification message cannot be null, empty nor white space.",
                JsonRequestBehavior.AllowGet);
        }

        var resultList = new List<MessageSendResultLight>();
        var uris = this.pushUserEndpointsRepository.GetPushUsersByName(userId).Select(u => u.ChannelUri);
        var pushUserEndpoint = this.pushUserEndpointsRepository.GetPushUsersByName(userId).FirstOrDefault();

        var raw = new RawPushNotificationMessage
        {
            SendPriority = MessageSendPriority.High,
            RawData = Encoding.UTF8.GetBytes(message)
        };

        foreach (var uri in uris)
        {
            var messageResult = raw.SendAndHandleErrors(new Uri(uri));
            resultList.Add(messageResult);

            if (messageResult.Status.Equals(MessageSendResultLight.Error))
            {
                this.QueueError(pushUserEndpoint, message);
            }
        }

        return this.Json(resultList, JsonRequestBehavior.AllowGet);
    }

    Really the only major change is that if the messageResult comes back with an error, we’ll queue the failed notification. QueueError looks like this:

    private void QueueError(PushUserEndpoint pushUser, string message)
    {
        var queue = this.cloudQueueClient.GetQueueReference("notificationerror");

        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(
            string.Format("{0}|{1}", pushUser.ChannelUri.ToString(), message)
        ));
    }

    We’re simply placing the message on the queue with the data we want: you need to get used to string parsing with queues. In this case, we’ll delimit the data (which is the channel URI and the message of the notification) with a pipe character. While the channel URI is not likely to change, it’s a better approach to store the username and not the URI in the message, and instead do a lookup of the current URI before sending (much like the top of SendMicrosoftRaw does), but for the purposes of the demo this is fine.
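
    If you did want to take that approach, a rough sketch (my own, not part of the toolkit) could queue the user name, the controller's userId parameter, and resolve the channel URI at retry time:

    // In QueueError: store the user name instead of the (possibly stale) channel URI.
    queue.AddMessage(new CloudQueueMessage(
        string.Format("{0}|{1}", userId, message)));

    // ...later, when retrying a dequeued message (assumes the retry code also has
    // access to the same pushUserEndpointsRepository as the controller):
    string[] parts = failedMessage.AsString.Split('|');
    var user = this.pushUserEndpointsRepository.GetPushUsersByName(parts[0]).FirstOrDefault();
    if (user != null)
    {
        var raw = new RawPushNotificationMessage
        {
            SendPriority = MessageSendPriority.High,
            RawData = Encoding.UTF8.GetBytes(parts[1])
        };
        raw.SendAndHandleErrors(new Uri(user.ChannelUri));
    }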

    When we try sending a raw notification when the application isn’t running, we’ll get the following error:

    image_thumb3

    Typically, without a queue, you’re stuck. Using a tool like Cloud Storage Studio, we can see the notification is written to the failure queue, including the channel URI and the message:
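
    If you don’t have a storage tool handy, you can also peek at the queue from code (a minimal sketch using the same storage client; peeking returns a message without hiding it from other readers):

    var queue = cloudQueueClient.GetQueueReference("notificationerror");
    queue.CreateIfNotExist();

    CloudQueueMessage peeked = queue.PeekMessage();
    if (peeked != null)
    {
        // e.g. "channelUri|message"
        System.Console.WriteLine(peeked.AsString);
    }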

    image_thumb5

    So, now we need a simple mechanism to poll for messages in the queue, and try to send them again. Because this is an Azure web role, there’s a way to get a “free” thread to do some processing. I say free because it’s invoked by the Azure runtime automatically, so it’s a perfect place to do some background processing outside of the main site. In WebRole.cs, you’ll see there is no Run() method. The base WebRole Run() method does nothing (it does an indefinite sleep), but we can override that. The caveat is, we never want to exit this method. If an exception bubbles out of this method, or we forget to loop, the role will recycle when the method exits:

    public override void Run()
    {
        this.cloudQueueClient = cloudQueueClient ??
            GetStorageAccountFromConfigurationSetting().CreateCloudQueueClient();
        var queue = this.cloudQueueClient.GetQueueReference("notificationerror");
        queue.CreateIfNotExist();

        while (true)
        {
            Thread.Sleep(200);

            CloudQueueMessage message = queue.GetMessage(TimeSpan.FromSeconds(60));

            if (message == null) continue;

            if (message.DequeueCount > 60)
            {
                queue.DeleteMessage(message);
                continue;
            }

            string[] messageParameters = message.AsString.Split('|');

            var raw = new RawPushNotificationMessage
            {
                SendPriority = MessageSendPriority.High,
                RawData = Encoding.UTF8.GetBytes(messageParameters[1])
            };

            var messageResult = raw.SendAndHandleErrors(new Uri(messageParameters[0]));

            if (messageResult.Status.Equals(MessageSendResultLight.Success))
            {
                queue.DeleteMessage(message);
            }
        }
    }

    What this code is doing, every 200 milliseconds, is looking for a message on the failure queue. Messages are marked with a 60 second timeout – this will act as our “retry” window. Also, if we’ve tried to send the message more than 60 times, we’ll quit trying. Got to stop somewhere, right?

    We’ll then grab the message from the queue, and parse it based on the pipe character we put in there. We’ll then send another raw notification to that channel URI. If the message was sent successfully, we’ll delete the message. Otherwise, do nothing and it will reappear in 60 seconds.

    While this code is running in an Azure Web Role, it’s just as easy to run in a client app, service app, or anything else. Pretty straightforward, right? No database calls, stored procedures, locking or flags to update. Download the completed project (which is the base solution in the toolkit plus these changes) here (note: you’ll still need the toolkit):

    VS2010 Solution

    The final demo was putting it all together using the Babelcam demo – Azure queues, tables, notifications, and ACS.

    Questions or comments? Let me know.


    <Return to section navigation list>

    Other Cloud Computing Platforms and Services

    Charles Babcock claimed “Diablo upgrade of cloud open source project adds a service monitoring dashboard, like Amazon's CloudWatch, as well as Active Directory user authentication” in a deck for his OpenStack Adds Private Cloud-Building Features article of 9/23/2011 for InformationWeek:

    OpenStack, one of several open source options for building a private cloud, launched its fourth release in a little over a year on Thursday with several features that make it easier to manage an enterprise cloud.

    OpenStack is the big open source project founded by NASA and Rackspace in July 2010 that competes with Eucalyptus Systems, an Amazon Web Services compatible offering, and Nimbula, a vendor neutral cloud operating system from the architects of AWS' EC2. In addition, three startups--Piston Cloud Computing, Cloud.com, and Nebula--have adopted OpenStack as the basis of their commercial offerings, making it the favorite for building a private cloud. Former NASA CTO Chris Kemp left the space agency in order to found Nebula, a private cloud appliance company, which launched in July of this year; the appliance runs OpenStack software. …

    OpenStack has attracted the broadest following so far in terms of cloud open source code software. The project lists over 100 member companies, including Citrix Systems, Dell, and Cisco--and got a boost earlier this month when HP announced it will adopt the OpenStack software as the heart of its own cloud initiative. HP has a representative, the recently hired former head of Rackspace software development John Purrier, on the OpenStack board. HP will sponsor the OpenStack Summit which opens in Boston on Oct. 3. HP's cloud with both compute and storage services launched Sept. 7, but it is still a private beta offering.

    In its fourth release, known as Diablo, OpenStack adds several features that make it easier to implement a private cloud for enterprise use. Strictly speaking, OpenStack can be used by either cloud service providers or companies seeking to build their own cloud, but the Diablo release primarily addresses the latter. …

    For example, OpenStack now includes a dashboard for monitoring an OpenStack cloud's health, a boon to IT managers and business managers alike who wish to check on the continued operation of their workloads. One of the Amazon Web Services EC2's most popular features is its CloudWatch service health dashboard, giving visibility into individual EC2 data centers.

    The dashboard is the result of a new project within OpenStack, led by Nebula representatives. The Web-based user interface provides a way for an OpenStack implementer to graphically represent services and report on their operation, said Devin Carlen, project technical lead and VP of engineering at Nebula, in the OpenStack announcement.

    A second newly incubated project within OpenStack and part of the Diablo release is Keystone. Rackspace engineers lead the Keystone project, which seeks to provide a unified means of authenticating users across an OpenStack cloud by making use of Microsoft's Active Directory and LDAP-based user directories. By incorporating a company's existing directories into a cloud's operation, Keystone resolves one of the stumbling blocks to establishing a private cloud.

    Another added feature is a distributed scheduler, which makes it possible for multiple virtual machines to be deployed at the same time in geographically distinct locations to run the same workload. The practice ensures a system will remain available, even if one cluster or individual cloud data center goes down unexpectedly. It helps private cloud builders implement high availability systems.


    Jinesh Varia (@jinman) described a New Whitepaper: Amazon's Corporate IT Deploys Corporate Intranet Running SharePoint 2010 on AWS in a 9/21/2011 post:

    Within Amazon, we often use the phrase "drinking our own champagne" to describe our practice of using our own products and services to prove them out under actual working conditions. We build products that we can use ourselves. We believe in them.

    Amazon's Corporate IT recently wrapped up an important project, and they have just documented the entire project in a new technical whitepaper.

    Download whitepaper (PDF)

    Amazon's Corporate IT team deployed its corporate intranet to Amazon EC2 instances running Microsoft SharePoint 2010 and SQL Server 2008, all within a Virtual Private Cloud (Amazon VPC). This is a mission-critical internal corporate application that must deal with a large amount of very sensitive data.

    The whitepaper describes the entire deployment process in step by step fashion: initial requirements analysis, security review, deployment success criteria, proof of concept, application architecture, configuration of SharePoint 2010 and SQL Server, and final production deployment.

    There are a number of reasons why I am so excited about this project:

    1. During the deployment process our Corporate IT team treated AWS as they would treat any other vendor. They leveraged the same products that our other customers use. They paid for the AWS Premium Support service and received pre-implementation advice from our AWS Solution Architects the same way we do for other enterprise customers. They conducted a thorough security review and decided to encrypt all data at rest and in flight. They used EBS snapshots to reduce the risk of losing data, and also implemented a failover mechanism that can attach an existing EBS volume to a fresh EC2 instance when necessary.
    2. This project involved commercial software licenses and demonstrates that the flexibility of AWS allows our customers to run commercial enterprise-grade software (like Microsoft SharePoint and SQL Server Enterprise) in the cloud. The whitepaper not only discusses the technical architecture and implementation details but also how you can leverage key security features (like Windows DPAPI for Key management) to further enhance the security and reliability of your applications. Today, with Microsoft License Mobility with Software Assurance, you can bring your existing licenses of several Microsoft Windows server applications to the cloud.
    3. Real benefits emerged:
      • Infrastructure procurement time was reduced from over four to six weeks to minutes.
      • The server image build process that had previously taken a half day is now automated.
      • Annual infrastructure costs were cut by 22 percent when on-premise hardware was replaced with equivalent cloud resources.
      • The operational overhead of server lease returns was eliminated, freeing up approximately 2 weeks of engineering overhead per year by replacing servers with equivalent cloud resources.

    Today, you can run enterprise software from Microsoft, Oracle, SAP, IBM and several other vendors in the AWS Cloud. If you are an ISV and you'd like to move your products to the cloud, we're ready to help. The AWS ISV program offers a wide variety of sales, technical, marketing, PR, and alliance benefits to qualified ISVs and solution providers.

    The paper is a great example of how a complex mission-critical application can be deployed to the cloud in a way that makes it more reliable, more flexible, and less expensive to operate. Read it now and let me know what you think.


    Jeff Barr (@jeffbarr) described Scientific Computing with EC2 Spot Instances in a 9/21/2011 post:

    Do you use EC2 Spot Instances in your application? Do you understand how they work and how they can save you a lot of money? If you answered no to either of these questions, then you are behind the times and you need to catch up.

    I'm dead-serious.

    The scientific community was quick to recognize that their compute-intensive batch workloads (often known as HPC or Big Data) were a perfect fit for EC2 Spot Instances. These AWS customers have seen cost savings of 50% to 66% compared to running the same jobs on On-Demand instances. They are able to make the best possible use of their research funds. Moreover, they can set their Spot bid price to reflect the priority of the work, bidding higher to increase their access to cycles.
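    For readers who haven't tried it, here is a minimal sketch of placing a Spot bid with the AWS SDK for .NET. It's my own illustration rather than anything from Jeff's post, the AMI ID is a placeholder, and exact property names or the sync/async surface may vary between SDK versions:

    using System;
    using Amazon;
    using Amazon.EC2;
    using Amazon.EC2.Model;

    class SpotBidExample
    {
        static void Main()
        {
            var ec2 = new AmazonEC2Client(RegionEndpoint.USEast1);

            var request = new RequestSpotInstancesRequest
            {
                SpotPrice = "0.10",      // maximum price per instance-hour, in USD
                InstanceCount = 10,      // ask for a small batch of workers
                LaunchSpecification = new LaunchSpecification
                {
                    ImageId = "ami-xxxxxxxx",                           // placeholder: AMI with the batch job baked in
                    InstanceType = InstanceType.FindValue("c1.xlarge")  // compute-heavy type, purely illustrative
                }
            };

            // Use the *Async variant on newer SDK targets.
            ec2.RequestSpotInstances(request);
            Console.WriteLine("Requested " + request.InstanceCount + " Spot Instances at a maximum of $" + request.SpotPrice + " per hour.");
        }
    }

    If the market Spot price stays below your bid, the instances keep running and you pay the market price; if it rises above your bid, the instances are reclaimed, which is why the checkpointing patterns discussed below matter.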

    Our friends over at Cycle Computing have used Spot Instances to create a 30,000-core cluster that spans three AWS Regions. They were able to run this cluster for nine hours at a cost of $1,279 per hour (a 57% savings vs. On-Demand). The molecular modeling job running on the cluster consumed 10.9 compute-years and had access to over 30 TB of RAM.

    Harvard’s Laboratory for Personalized Medicine (LPM) uses Amazon EC2 Spot Instances to run genetic testing models and simulations, and stretch their grant money even further. One day of engineering allowed them to save roughly 50% on their instance costs moving forward.

    Based on the number of success stories that we have seen in the scientific arena, we have created a brand new (and very comprehensive) resource page dedicated to scientific researchers using Spot Instances. We've collected a number of scientific success stories, videos, and other resources. Our new Scientific Computing Using Spot Instances page has all sorts of goodies for you.

    Among the many new and unique things you will find:

    • A case study from Harvard Medical School. They run patient (virtual avatar) simulations on EC2. After one day's worth of engineering effort, they now run their simulations on Spot Instances and have realized a cost savings of over 50%. Some of the work described in this case study is detailed in a new paper, Biomedical Cloud Computing With Amazon Web Services.
    • A video tutorial that will show you how to use the newest version of MIT's StarCluster to launch an entire cluster of Spot Instances in minutes. This video was produced by our friends at BioTeam.
    • A video tutorial that will show you how to launch your Elastic MapReduce job flows on Spot Instances.
    • Detailed technical and business information about the use of Spot Instances for scientific applications including a guide to getting started and information on migrating your applications.
    • Common architectures (MapReduce, Grid, Queue, and Checkpoint) and best practices; a toy Checkpoint example appears after this list.
    • Additional case studies from DNAnexus, Numerate, University of Melbourne/University of Barcelona, BioTeam, Cycle Computing, and EagleGenomics.
    • A list of great Solution Providers who can help you get started if you need a little extra assistance migrating to Spot Instances.
    • Documentation and tutorials.
    • Links to a number of research papers on the use of Spot Instances.
    • Other resources like our Public Data Sets on AWS and AWS Academic programs.
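    As a taste of the Checkpoint architecture mentioned above, here is a toy sketch (mine, not from the AWS page) of a worker that records its progress after each unit of work so a replacement instance can resume if the Spot Instance it was running on is reclaimed. A real workload would checkpoint to durable storage such as Amazon S3 rather than the local file used here:

    using System;
    using System.IO;

    class CheckpointWorker
    {
        const string CheckpointFile = "checkpoint.txt";
        const int TotalWorkItems = 1000;

        static void Main()
        {
            // Resume from the last completed item, or start from scratch.
            int next = File.Exists(CheckpointFile)
                ? int.Parse(File.ReadAllText(CheckpointFile))
                : 0;

            for (; next < TotalWorkItems; next++)
            {
                ProcessItem(next);                                         // the actual science goes here
                File.WriteAllText(CheckpointFile, (next + 1).ToString());  // record progress after each item
            }

            Console.WriteLine("All work items complete.");
        }

        static void ProcessItem(int item)
        {
            Console.WriteLine("Processing item " + item);
        }
    }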

    Spot Instances work great for scientific research, but a huge number of other customers also love Spot. For example, Spot works really well for use cases like analytics, big data, financial modeling, geospatial analysis, image and media encoding, testing, and web crawling. Check out this brand new video for more information on common use cases and example customers who leverage them.

    Again, if you don't grasp the value of Spot Instances, you are behind the times. Check out our new page and bring yourself up to date today.

    If you have a scientific computing success story of your own (with or without Spot) or have feedback on how to make Spot even better, we'd love to hear more about it. Please feel free to post a comment to the blog or to email it to us at spot-instance-feedback@amazon.com.

    Finally, if you are excited about Spot and want to join our team, please contact Kelly O’Mara at komara@amazon.com to learn more about the team and our open positions.


    <Return to section navigation list>
