Wednesday, May 09, 2012

Windows Azure and Cloud Computing Posts for 5/7/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Gaurav Mantri (@gmantri) started a new series with Comparing Windows Azure Blob Storage and Amazon Simple Storage Service (S3)–Part I on 5/9/2012, which begins:

In this blog post, we’re going to compare Windows Azure Blob Storage Service and Amazon Simple Storage Service (S3) from a core functionality point of view, focusing on core concepts, pricing and a feature comparison between blob containers and buckets (defined below). In Part II of this blog post we’ll focus on comparing blobs and objects (defined below).

For the sake of brevity, we’re going to refer to Windows Azure Blob Storage as WABS and Amazon Simple Storage Service as AS3 in the rest of this blog post.

From a fundamental functionality point of view, both WABS and AS3 provide similar capabilities. Simply put, both can be considered file systems in the cloud, allowing you to store huge amounts of unstructured data (usually in the form of files).

In both systems, you can create one or more blob containers or buckets which will hold zero or more blobs or objects respectively.

Both systems provide a REST-based API for working with blob containers and blobs (or buckets and objects), as well as higher-level language libraries that are essentially wrappers over the REST API. Over the years, both systems have evolved in terms of the functionality provided. In both systems, each release of the API is versioned and the version is specified as a date. At the time of writing this blog, the service version number for WABS is 2011-08-18 while that of AS3 is 2006-03-01.
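
[Editorial note: to make the versioning point concrete, here is a minimal sketch (not from Gaurav’s post) of pinning the WABS service version on a raw REST call; the account and container names are placeholders and the request is unauthenticated, so it only works against a container whose ACL permits public access:]

public class BlobVersionHeaderSample
{
    public static void Main()
    {
        // List the blobs in a public container, explicitly requesting the 2011-08-18 service version.
        var uri = "http://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=list";
        var request = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(uri);
        request.Headers["x-ms-version"] = "2011-08-18";

        using (var response = request.GetResponse())
        using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
        {
            System.Console.WriteLine(reader.ReadToEnd()); // XML enumeration of the container's blobs
        }
    }
}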

At a very high level, both systems provide similar functionality. Here are some of them:

  • Both systems in essence are file systems in the cloud with two levels of hierarchy.
  • Both systems allow you to store large amounts of data reliably and cheaply.
  • Both systems allow you to protect your content from unauthorized access.
  • Both systems allow you to keep multiple versions of the same object; however, the way versioning is implemented differs between the two systems.
  • Both systems allow you to expose the contents of a blob container and bucket through their respective content delivery networks (CDN) for lower latency and content caching.
  • Both systems provide access control mechanisms to protect data stored. AS3 has many options (like Amazon Identity and Access Management (IAM), Bucket Policies, ACLs and Query String Authentication) whereas WABS provides ACLs and Shared Access Signatures (see the sketch after this list).
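
[Editorial note: as an aside that isn’t part of Gaurav’s article, a Shared Access Signature in WABS can be generated with the 2012-era Microsoft.WindowsAzure.StorageClient library along these lines; the development storage account and container name stand in for real ones:]

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class SasSample
{
    public static void Main()
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var container = account.CreateCloudBlobClient().GetContainerReference("mycontainer");
        container.CreateIfNotExist();

        // Grant read-only access for one hour without handing out the account key.
        string sas = container.GetSharedAccessSignature(new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        });

        Console.WriteLine(container.Uri.AbsoluteUri + sas); // URL a client can use directly
    }
}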

There are a few differences as well. We’ll talk more about them later in the blog post, but here are some of the major ones:

  • WABS only supports the HTTP (REST) protocol, whereas AS3 supports HTTP (via REST and SOAP) as well as the BitTorrent protocol for peer-to-peer distribution of content.
  • AS3 has public mechanisms in place to import and export extremely large amounts of data (Amazon Import/Export). This feature is not publicly available in WABS yet.
  • In AS3 you can set objects to auto delete after a certain amount of time. This feature is not currently available in WABS.
  • AS3 allows you to charge your customers for their usage of AS3 resources stored in your account using Amazon DevPay. This is a great enabler for building SaaS applications. This feature is not currently available in WABS. Yet another feature available in AS3 which is not available in WABS is Requester Pay Buckets where users accessing the data stored in your buckets pay for the usage.
  • AS3 allows you to encrypt the data stored using Server Side Encryption (SSE). This feature is not available in WABS.
  • AS3 supports both virtual-hosted-style (e.g. http://mybucket.s3.amazonaws.com/myobject) and path-style (e.g. http://s3-eu-west-1.amazonaws.com/mybucket/myobject) URLs, whereas WABS only supports path-style (e.g. http://myaccount.blob.core.windows.net/myblobcontainer/myblob).
  • AS3 offers a Reduced Redundancy Storage (RRS) feature wherein customers can opt to store their data at a lower level of redundancy (99.99% durability and 99.99% availability) than AS3’s standard redundancy (99.999999999% durability and 99.99% availability), thus lowering their storage costs. I think it’s pretty neat to have this option to lower my storage costs if my data is not critical and easily reproducible. WABS offers a single level of redundancy. …

And continues with a very detailed comparison of the two offerings. Gaurav’s camera appears to have very low resolution.

Gaurav also posted Comparing Windows Azure Table Storage and Amazon DynamoDB–Summary on 4/30/2012 (missed when published).


Doug Henschen (@DHenschen) asserted “HBase still has a few 'rough edges,' but that hasn't kept this NoSQL database from becoming one of the hottest pockets within the white-hot Hadoop market” as a deck for his HBase: Hadoop's Next Big Data Chapter article of 5/8/2012 for InformationWeek:

The Apache Hadoop Framework has many components, including the MapReduce distributed data-processing model, Hadoop Distributed File System (HDFS), Pig data-flow language, and Hive distributed data warehouse. But the part stealing much of the attention these days--and arguably maturing most rapidly--is HBase.

HBase is Hadoop's NoSQL database. Patterned after Google BigTable, HBase is designed to provide fast, tabular access to the high-scale data stored on HDFS. High-profile companies including Facebook and StumbleUpon use HBase, and they're part of a fast-growing community of contributors who are adding enterprise-grade reliability and performance upgrades at a steady clip.

Why is HBase so important?

First and foremost, it's part of the Hadoop Framework, which is at the epicenter of a movement in which companies large and small are making use of unprecedented volumes and varieties of information to make data-driven decisions. Hadoop not only handles data measured by the tens or hundreds of terabytes or more, it can process textual information like social media streams, complex data like clickstreams and log files, and sparse data with inconsistent formatting. Most importantly, it does all this at low cost, powered by open source software running on highly scalable clusters of inexpensive, commodity X86 servers.

The core data-storage layer within Hadoop is HDFS, a distributed file system that supports MapReduce processing, the approach that Hadoop practitioners invariably use to boil down big data sets into the specific information they're after. For operations other than MapReduce, HDFS isn't exactly easy to work with. That's where HBase comes in. "Anybody who wants to keep data within an HDFS environment and wants to do anything other than brute-force reading of the entire file system [with MapReduce] needs to look at HBase," explains Gartner analyst Merv Adrian. "If you need random access, you have to have HBase."

HBase offers two broad use cases. First, it gives developers database-style access to Hadoop-scale storage, which means they can quickly read from or write to specific subsets of data without having to wade through the entire data store. Most users and data-driven applications are used to working with the tables, columns, and rows of a database, and that's what HBase provides.

Second, HBase provides a transactional platform for running high-scale, real-time applications. In this role, HBase is an ACID-compliant database (meeting standards for Atomicity, Consistency, Isolation, and Durability) that can run transactional applications. That's what conventional relational databases like Oracle, IBM DB2, Microsoft SQL Server, and MySQL are mostly used for, but HBase can handle the incredible volume, variety, and complexity of data encountered on the Hadoop platform. Like other NoSQL databases, it doesn't require a fixed schema, so you can quickly add new data even if it doesn't conform to a predefined model.

Life sciences research firm NextBio uses Hadoop and HBase to help big pharmaceutical companies conduct genomic research. The company embraced Hadoop in 2009 to make the sheer scale of genomic data-analysis more affordable. The company's core 100-node Hadoop cluster, which has processed as much as 100 terabytes of data, is used to compare data from drug studies to publicly available genomics data. Given that there are tens of thousands of such studies and 3.2 billion base pairs behind each of the hundreds of genomes that NextBio studies, it's clearly a big-data challenge.

NextBio uses MapReduce processing to handle the correlation work, but until recently it stored the results--now exceeding 30 billion rows of information--exclusively on a MySQL database. This conventional relational database offers fast storage and retrieval, but NextBio knew it was reaching the limits of what a conventional database like MySQL could handle in terms of scale--at least without lots of database administrative work and high infrastructure cost. …

Read more.


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

The SQL Azure Team reported [SQL Azure Database] [North Central US] [Yellow] Intermittent Timeouts on 5/6/2012:

  • Apr 30 2012 8:06PM We are actively investigating an intermittent timeout issue in SQL Azure which is likely to happen during database copies or sharing splits. We are working to resolve it as soon as possible. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers.
  • May 2 2012 12:55AM We continue to work on repair steps to mitigate the issue. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers.
  • May 6 2012 2:15AM We have found the issue and implemented a resolution. Service is now running as normal. We apologize for any inconvenience this causes our customers.

Similar problems occurred in the South Central US Data Center.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData

My (@rogerjenn) Creating An Incremental SQL Azure Data Source for OakLeaf’s U.S. Air Carrier Flight Delays Dataset post updated 5/8/2012 begins:

My initial U.S. Air Carrier Flight Delays, Monthly dataset for the Windows Azure Marketplace DataMarket was intended to incorporate individual tables for each month of the years 1987 through 2012 (and later.) I planned to compare the performance of datasets and Windows Azure blob storage as persistent data sources for Apache Hive tables created with the new Apache Hadoop on Windows Azure feature.

I used Microsoft Codename “Data Transfer” to create the first two of these SQL Azure tables, On_Time_Performance_2012_1 and On_Time_Performance_2012_2, from corresponding Excel On_Time_Performance_2012_1.csv and On_Time_Performance_2012_2.csv files in early May 2012. For more information about these files and the original U.S. Air Carrier Flight Delays, Monthly dataset see my Two Months of U.S. Air Carrier Flight Delay Data Available on the Windows Azure Marketplace DataMarket post of 5/4/2012.

Subsequently, I discovered that the Windows Azure Marketplace Publishing Portal had problems uploading the large (~500,000 rows, ~15 MB) On_Time_Performance_YYYY_MM.csv files. I was advised by Microsoft’s Group Program Manager for the DataMarket that the *.csv upload feature would be disabled to “prevent confusion.” For more information about this issue, see my Microsoft Codename “Data Transfer” and “Data Hub” Previews Don’t Appear Ready for BigData post updated 5/5/2012.

A further complication was the suspicion that editing the current data source to include each additional table would require a review by a DataMarket proctor. An early edit of one character in a description field had caused my dataset to be offline for a couple of days.

A workaround for the preceding two problems is to create an on-premises clone of the SQL Azure table with a RowID identity column and recreate the SQL Azure table without the identity property on the RowID column. Doing this permits using a BULK INSERT instruction to import new rows from On_Time_Performance_YYYY_MM.csv files into the local SQL Server 2012 table and then using George Huey’s SQL Azure Migration Wizard (SQLMW) v3.8.7 or later to append new data to a single On_Time_Performance SQL Azure table. Managing primary key identity values of an on-premises SQL Server table is safer and easier than with SQL Azure.
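
[Note: for readers who haven’t driven BULK INSERT from code before, a minimal sketch of that step follows. It isn’t taken from the post; the connection string, database, table and file names are hypothetical placeholders patterned on the On_Time_Performance naming:]

using System.Data.SqlClient;

public class BulkImportSample
{
    public static void Main()
    {
        const string sql = @"
            BULK INSERT dbo.On_Time_Performance
            FROM 'C:\Data\On_Time_Performance_2012_3.csv'
            WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);";

        using (var conn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=FlightDelays;Integrated Security=True"))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.CommandTimeout = 0; // a monthly file holds roughly 500,000 rows, so don't let the command time out
            cmd.ExecuteNonQuery();  // appends the new month to the on-premises clone table
        }
    }
}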

The downside of this solution is that maintaining access to the 1-GB SQL Azure Web database will require paying at least US$9.99 per month plus outbound bandwidth charges after your free trial expires. Microsoft provides free SQL Azure storage when you specify a new database in the Windows Azure Marketplace Publishing Portal.

This post describes the process and T-SQL instructions for creating and managing the on-premises SQL Server [Express] 2012 databases, as well as incrementally uploading new data to the SQL Azure database. …

And continues with a detailed tutorial having the following sections:

  • Creating the SQL Azure On_Time_Performance Table
  • Creating an On-Premises SQL Server Clone Table
  • Importing *.csv Data with the BULK IMPORT Command
  • Uploading Data to the SQL Azure Table with SQLAzureMW
  • Calculating the Size of the SQL Azure Database and Checking for Upload Errors
  • Conclusion


The WCF Data Services Team announced WCF Data Services, now with more releases! on 5/7/2012:

Like other teams at Microsoft, WCF Data Services has been working toward a goal of more frequent releases. We released 5.0 on April 9 and we pre-released WCF Data Services 5.0.1-rc* on April 20 (we’ll release the final version of 5.0.1 very soon). The rapid release was possible because of three changes we’re making. First up, we’re…

Adopting semantic versioning

Semantic versioning is a growing movement that proposes a solution to confusing version numbers. Let’s see how semantic versioning applies to the pre-release: 5.0.1-rc.

  • The first digit is the major version. Semantic versioning states that the major version should be bumped only when there are breaking changes in a public API.
  • The second digit is the minor version. Semantic versioning asserts that the minor version should be bumped when new functionality is introduced to the public API.
  • The third digit is the patch version. Semantic versioning says that the patch version should be bumped for bug fixes that do not affect the public API.
  • The hyphenated string is the pre-release version. Semantic versioning claims that the pre-release version should be appended to anything that is not a release.

Since 5.0.1 only contains bug fixes, we bumped only the third digit of the version number. The prerelease that went out on April 20 was suffixed with -rc to indicate that the bits are not yet release quality.

* Note: When we initially pushed out the prerelease, we thought we would need the minor version increment. This turned out not to be the case, so we fixed the version number to be the correct value according to semantic versioning. We hid the 5.1.0-rc prerelease today, replaced it with 5.0.1-rc and will release the final bits as 5.0.1. If you have installed 5.1.0-rc, you’ll need to uninstall that package before you can install 5.0.1-rc or 5.0.1.

Moving to semantic versioning facilitates our decision to…

Distribute new releases via NuGet

NuGet is a fantastic binary distribution system for a number of reasons. NuGet enables us to release more frequently without causing additional confusion – the version history on the package page clearly displays the versions that have been released:

[Screenshot: version history on the NuGet package page]

NuGet also makes it easy to take a dependency on a particular version of an assembly with commands like Install-Package Microsoft.Data.Services.Client –Version 5.0.1. Furthermore, NuGet makes it very easy to recognize prereleases and even provides a special version of the command line that supports prerelease versions:

[Screenshot: NuGet command line installing a prerelease package]

NuGet also makes it easier for us to achieve a third goal…

Changing to a “bin deploy” model

If you’re not familiar with the term, a bin deploy model is one in which you simply copy the contents of the bin directory to the target location – there are no MSIs involved, no GAC – just DLLs and your preferred file transfer mechanism.

There are many reasons to move to a bin deploy model. Two of the more obvious reasons are:

  1. A bin deploy model simplifies deployment and increases the number of potential deployment targets. You don’t need your Web server to run a particular MSI that puts the appropriate assemblies into the GAC. This opens up a whole host of scenarios, including deploying OData services to hosting providers that won’t run the WCF Data Services MSI. It also increases confidence that when an application is deployed, it has all of its dependencies readily available.
  2. A bin deploy model allows trust levels that are commonly leveraged by hosting providers.

We’ll be writing more about NuGet and bin deploy in the near future, but for now we would love to hear your…

Feedback please!

We’d love to hear what you think about 5.0.0, our more frequent releases, and our changes to distribution/deployment. Feel free to leave a comment below!


Judith Hurwitz (@jhurwitz) announced the start of a new series of blog posts for Bloomberg BusinessWeek with What is the big deal about big data? of 5/3/2012:

I have begun writing a weekly blog for Bloomberg BusinessWeek. I am pleased to be able to republish my discussions on my blog site. In this blog I discuss the value of big data and its implications for the business. Your comments on this blog are welcomed.

The ability to analyze mountains of data is important, but managers need to be wary of suppliers relabeling a traditional product as new technology.


<Return to section navigation list>

Windows Azure Service Bus, Access Control, Integration Services, Identity and Workflow

Richard Seroter (@rseroter) posted Interview Series: Four Questions With … Dean Robertson on 5/9/2012:

I took a brief hiatus from my series of interviews with “connected systems” thought leaders, but we’re back with my 39th edition. This month, we’re chatting with Dean Robertson who is a longtime integration architect, BizTalk SME, organizer of the Azure User Group in Brisbane, and both the founder and Technology Director of Australian consulting firm Mexia. I’ll be hanging out in person with Dean and his team in a few weeks when I visit Australia to deliver some presentations on building hybrid cloud applications.

Let’s see what Dean has to say.

Q: In the past year, we’ve seen a number of well known BizTalk-oriented developers embrace the new Windows Azure integration services. How do you think BizTalk developers should view these cloud services from Microsoft? What should they look at first, assuming these developers want to explore further?

A: I’ve heard on the grapevine that a number of local BizTalk guys down here in Australia are complaining that Azure is going to take away our jobs and force us all to re-train in the new technologies, but in my opinion nothing could be further from the truth.

BizTalk as a product is extremely mature and very well understood by both the developer & customer communities, and the business problems that a BizTalk-based EAI/SOA/ESB solution solves are not going to be replaced by another Microsoft product anytime soon. Further, BizTalk integrates beautifully with the Azure Service Bus through the WCF netMessagingBinding, which makes creating hybrid integration solutions (that span on-premises & cloud) a piece of cake. Finally, the Azure Service Bus is conceptually one big cloud-scale BizTalk messaging engine anyway, with secure pub-sub capabilities, durable message persistence, message transformation, content-based routing and more! So once you see the new Azure integration capabilities for what they are, a whole new world of ‘federated bus’ integration architectures reveals itself to you. So I think ‘BizTalk guys’ should see the Azure Service Bus bits as simply more tools in their toolbox, and trust that their learning investments will pay off when the technology circles back to on-premises solutions in the future.

As for learning these new technologies, Pluralsight has some terrific videos by Scott Seely and Richard Seroter that help get the Azure Service Bus concepts across quickly. I also think that nothing beats downloading the latest bits from MS and running the demos first-hand, then building their own “Hello Cloud” integration demo that includes BizTalk. Finally, they should come along to industry events (<plug>like Mexia’s Integration Masterclass with Richard Seroter</plug>) and their local Azure user groups to meet like-minded people who love to talk about integration!

Q: What integration problem do you think will get harder when hybrid clouds become the norm?

A: I think Business Activity Monitoring (BAM) will be the hardest thing to consolidate because you’ll have integration processes running across on-premises BizTalk, Azure Service Bus queues & topics, Azure web & worker roles, and client devices. Without a mechanism to automatically collect & aggregate those business activity data points & milestones, organisations will have no way to know whether their distributed business processes are executing completely and successfully. So unless Microsoft bring out an Azure-based BAM capability of their own, I think there is a huge opportunity opening up in the ISV marketplace for a vendor to provide a consolidated BAM capture & reporting service. I can assure you Mexia is working on our offering as we speak.

Q: Do you see any trends in the types of applications that you are integrating with? More off-premise systems? More partner systems? Web service-based applications?

A: Whilst a lot of our day-to-day work is traditional on-premises SOA/EAI/ESB, Mexia has also become quite good at building hybrid integration platforms for retail clients by using a combination of BizTalk Server running on-premises at Head Office, Azure Service Bus queues and topics running in the cloud (secured via ACS), and Windows Service agents installed at store locations. With these infrastructure pieces in place we can move lots of different types of business messages (such as sales, stock requests, online orders, shipping notifications etc.) securely around the world with ease, and at an infinitesimally low cost per message.

As the world embraces cloud computing and all of the benefits that it brings (such as elastic IT capacity & secure cloud-scale messaging), we believe there will be an ever-increasing demand for hybrid integration platforms that can provide the seamless ‘connective tissue’ between an organisation’s on-premises IT assets and its external suppliers, branch offices, trading partners and customers.
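
[Editorial note: the store-side agents Dean describes essentially push business messages onto ACS-secured Service Bus queues. Purely as an illustration (none of this code is from the interview; the namespace, issuer credentials and queue name are placeholders), such an agent’s send path might look like this with the 2012-era Microsoft.ServiceBus client library:]

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public class StoreAgentSketch
{
    public static void Main()
    {
        // ACS shared-secret credentials for the Service Bus namespace (placeholders).
        var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "issuerKey");
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", string.Empty);

        var factory = MessagingFactory.Create(address, tokenProvider);
        var client = factory.CreateQueueClient("salesmessages");

        // Push a business message (e.g. a daily sales batch) from the store to Head Office.
        client.Send(new BrokeredMessage("store 42: daily sales batch"));

        factory.Close();
    }
}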

Q [stupid question]: Here in the States, many suburbs have people on the street corners who swing big signs that advertise things like “homes for sale!” and “furniture – this way!” I really dislike this advertising model because they don’t broadcast traditional impulse buys. Who drives down the street, sees one of these clowns and says “Screw it, I’m going to go pick up a new mattress right now.” Nobody. For you, what are your true impulse purchases where you won’t think twice before acting on an urge, and plopping down some money?

A: This is a completely boring answer, but I cannot help myself on www.amazon.com. If I see something cool that I really want to read about, I’ll take full advantage of the ‘1-click ordering’ feature before my cognitive dissonance has had a chance to catch up. However when the book arrives either in hard-copy or on my Kindle, I’ll invariably be time poor for a myriad of reasons (running Mexia, having three small kids, client commitments, etc.) so I’ll only have time to scan through it before I put it on my shelf with a promise to myself to come back and read it properly one day. But at least I have an impressive bookshelf!

Thanks Dean, and see you soon!


Chris Klug (@ZeroKoll) described Securing a NancyFx module with the Azure Access Control Service on 5/8/2012:

In my previous post I gave a semi-quick introduction to NancyFx. This time, I want to take Nancy and combine it with Azure ACS. Not a very complicated thing as such, but still something I want to do as I enjoy working with both technologies.

Just as in the last post, I will self-host Nancy in a console application, and use NuGet to get it going. I will also re-use the “www.nancytesting.org” domain I set up in my hosts file in the last post.

Once I got my console application going with a host, and an empty NancyModule, it is time to start looking at the ACS.

The first thing I need to do is to set up a new ACS relying party. If you have not used the ACS before, I recommend reading up a bit on it to understand how it works. My own introduction might be useful.

In this case, I configure my Relying Party Application to use the Uri “http://www.nancytesting.org/” for both the realm and the return url. As for token format, I went with SAML 2.0 as it was the initially selected option… I then select no token encryption, support for both Google and Live ID, and a new default rule group. When it comes to the token signing certificate, I add a certificate I have created on my own, which is also placed in the “Trusted People” container on my local machine… And then as a final thing, I go into the newly created rule group and generate the default rules… Now the ACS should be ready to handle my requests…time to start working on the Nancy end of things…

The first thing I set up is a default route, just to make sure everything works…

public class AcsModule : NancyModule
{
    public AcsModule()
    {
        Get["/"] = parameters => "Hello World";
    }
}

And as expected, pointing my browser to http://www.nancytesting.org/ returns “Hello World”…time to move on and secure that route using the ACS…

Lucky for me, Microsoft provides us with a lot of help when it comes to federated security. They do this by giving us the Windows Identity Foundation SDK. Unfortunately, that whole thing is expecting you to use ASP.NET. It seems to prefer you configuring it through a config file, and pretty much expects that authorization is managed through HttpModules. This is somewhat of a problem when it comes to Nancy. And if you think it is just a quick fix, you will soon realize that the way it works is heavily geared towards having an HttpRequest, which once again does not exist in Nancy… It also means that we are back to relying on System.Web, which Nancy does fine without, but that’s ok for this time…

So…the solution for me was to run through the code using .NET Reflector and figure out what is happening internally. Lucky for you, I have already done this, and I won’t run through all the stuff I found. Instead I will just go through how I got it to work.

It might be that it had been easier to read the documentation, but what fun would that be…

As I still want to use WIF, you have to have the SDK installed, and add a reference to Microsoft.IdentityModel to the project. But to keep all of the WIF stuff out of the way, I put it all in a static helper class called AcsHelper.

The interface for the AcsHelper class looks like this

public static class AcsHelper
{
    public static void Configure(string audienceUri, string ns, string realm, string thumbPrint);

    public static bool IsSignInResponse(dynamic form);
    public static bool TryParseSignInResponse(Uri baseUri, Stream response, out SecurityToken token);
    public static bool VerifyTokenXml(string tokenXml, out SecurityToken token);
    public static string SerializeToken(SecurityToken token);
    public static string GetLoginUrl();
}

Remember, this is just a quick spike to see that it works, so the code might not be perfect if we put it like that…but please bear with me as I break it down and have a look at the interesting pieces anyway.

Let’s start by looking at the Configure() method. This is where all the configuration of the WIF parts are made (duh…). It replaces the config file that is normally used by WIF.

private const string ACSLoginUrlFormat = "https://{0}.accesscontrol.windows.net:443/v2/wsfederation?wa=wsignin1.0&wtrealm={1}";
private static string _acsLoginUrl;
private static SecurityTokenHandlerConfiguration _securityTokenHandlerConfiguration;

public static void Configure(string audienceUri, string ns, string realm, string thumbPrint)
{
    _acsLoginUrl = string.Format(ACSLoginUrlFormat, ns, HttpUtility.UrlEncode(realm));

    _securityTokenHandlerConfiguration = new SecurityTokenHandlerConfiguration();
    _securityTokenHandlerConfiguration.AudienceRestriction.AllowedAudienceUris.Add(new Uri(audienceUri));

    var issuerNameRegistry = new ConfigurationBasedIssuerNameRegistry();
    issuerNameRegistry.AddTrustedIssuer(thumbPrint, string.Format("https://{0}.accesscontrol.windows.net/", ns));
    _securityTokenHandlerConfiguration.IssuerNameRegistry = issuerNameRegistry;
}

It starts out by creating and storing the Url to the login page in the ACS. It then creates a new instance of SecurityTokenHandlerConfiguration, which is then configured to accept the passed in audienceUri as an allowed audience.

Next, a new IssuerNameRegistry is created and added to the SecurityTokenHandlerConfiguration instance. In this case I use the ConfigurationBasedIssuerNameRegistry, which allows me to configure certificate thumbprints and issuer Urls manually. I configure it using the passed in certificate thumbprint and the Url to the ACS namespace.

Once that is done, the config should be done. If you have other requirements such as encrypted tokens or other token formats, then you would have to modify the configuration to suit your needs…

The next method is the IsSignInResponse, which is responsible for looking at a posted form and define whether the request is a sign in response from the ACS. It looks like this

public static bool IsSignInResponse(dynamic form)
{
    return form["wa"] == "wsignin1.0";
}

So all it really does is look at the posted form and check whether the value posted with the key “wa” is the sign-in action (“wsignin1.0”). Simple, but effective…

The next method, TryParseSignInResponse() is a bit more complicated. It takes the response stream and converts it to a string. It then parses that string using the HttpUtility.ParseQueryString() to get to the form data in the form of a NameValueCollection. This can then be parsed by WIF and turned into a SignInResponseMessage, which in turn contains a bunch of XML that that can be turned into a RequestSecurityTokenResponse by using an instance of WSFederationSerializer. That response in turn contains some more XML that we pass to the next method called VerifyTokenXml().

So to make a long story short, we take the XML returned from the ACS and parse that using some classes from WIF to finally end up with some other XML, or rather a subset of the original XML, that we can then pass to another method to create a SecurityToken.

To be honest, I am not 100% sure what the layers of parsing here does (not a security guy that knows a whole lot about tokens and stuff), but the end result seems to just be a subset of the original XML. It might be possible to get to this by doing some XML manipulation on your own, but I thought I would rather do it like the people who wrote the WIF stuff intended me to do…

public static bool TryParseSignInResponse(Uri baseUri, Stream response, out SecurityToken token)
{
    var responseString = new StreamReader(response).ReadToEnd();
    var form = HttpUtility.ParseQueryString(responseString);
    var responseMessage = (SignInResponseMessage)WSFederationMessage.CreateFromNameValueCollection(baseUri, form);
    WSFederationSerializer federationSerializer;
    using (XmlDictionaryReader r = XmlDictionaryReader.CreateTextReader(Encoding.UTF8.GetBytes(responseMessage.Result), new XmlDictionaryReaderQuotas()))
    {
        federationSerializer = new WSFederationSerializer(r);
    }
    var context = new WSTrustSerializationContext();
    var tokenXml = federationSerializer.CreateResponse(responseMessage, context).RequestedSecurityToken.SecurityTokenXml.OuterXml;
    return VerifyTokenXml(tokenXml, out token);
}

The VerifyTokenXml method that gets passed the resulting XML does some more XML work to make sure that the token inside the XML can be understood and turned into a SecurityToken.

The SecurityToken is then used to create a new ClaimsPrincipal, which is used to set the current principal on the Thread. A new type of token, a SessionSecurityToken, is then created and returned.

The reason for the second token is that it is a lot smaller than the original, so when it is set as a cookie, it doesn’t overflow the 4k limit.

public static bool VerifyTokenXml(string tokenXml, out SecurityToken token)
{
    token = null;
    try
    {
        using (var reader = XmlReader.Create(new StringReader(tokenXml)))
        {
            var securityTokenHandlers = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection(_securityTokenHandlerConfiguration);
            if (securityTokenHandlers.CanReadToken(reader))
            {
                token = securityTokenHandlers.ReadToken(reader);
                reader.Close();

                var claims = securityTokenHandlers.ValidateToken(token);
                var principal = ClaimsPrincipal.CreateFromIdentities(claims);
                System.Threading.Thread.CurrentPrincipal = principal;

                token = new SessionSecurityToken(principal, null, token.ValidFrom, token.ValidTo);
                return true;
            }
        }
    }
    catch { }
    return false;
}

Another little disclaimer here would be regarding setting the principal on the current thread… I assume that this thread comes off a thread pool somewhere. In a real world application, I would suggest resetting the principal at the end of the request so you don’t end up with a bogus principal when the thread is picked up in the future… But on the other hand, that would just be my assumption, and we all know that assumptions are the mother of all !¤% ups…
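
[Editorial note: Chris doesn’t show the reset he mentions; one possible way to do it, assuming Nancy’s After pipeline and placed in the same module’s constructor, would be along these lines:]

// A possible reset of the principal when the request completes, so a pooled thread
// doesn't retain the authenticated identity (an assumption, not from the original post).
After += ctx =>
{
    System.Threading.Thread.CurrentPrincipal =
        new System.Security.Principal.GenericPrincipal(
            new System.Security.Principal.GenericIdentity(string.Empty), new string[0]);
};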

Ok, almost done with the WIF parts now. The last method worth looking at is the SerializeToken() method. It takes a SecurityToken and serializes it for storage in a cookie…

public static string SerializeToken(SecurityToken token)
{
    var tokenString = new StringBuilder();
    var xmlWriter = XmlWriter.Create(tokenString);
    var securityTokenHandlers = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection(_securityTokenHandlerConfiguration);
    securityTokenHandlers.WriteToken(xmlWriter, token);
    return tokenString.ToString();
}

Ok, so what does all of this have to do with NancyFx? Didn’t I promise that I would use Nancy as well? Well, I guess it is time to do so…so back to the NancyModule I created earlier…

Currently it has a single route configured, but I do need a few more. First of all, the ACS will actually pass the token information to me using a POST. So I have to make sure that Nancy allows that POST by adding an empty route for it.

I also need a login page. So I set up a GET-route for that, returning an sshtml-view containing some text saying that you have to login, and a login link. I pass the Url to the ACS login page as a model to the view…

And finally, I need to be able to sign out. I am not putting in a link for that in my “/” reply, but I will make it possible to log out if one knows the Url. So I add another GET-route that empties out my security cookie and redirects back to the login page.

The actual redirect is done through an sshtml-view that redirects using a JavaScript. The reason for not just emptying the cookie while passing an HTTP 307 straight away is that some browsers have issues with this. Emptying out the cookie while passing a view that will do the redirect for me solves that issue…

Post["/"] = parameters => "";
Get[LoginPath] = parameters => View["LoginView.sshtml", new { Url = AcsHelper.GetLoginUrl() }];
Get["SignOut"] = parameters =>
{
var view = View["RedirectView.sshtml", new { Url = LoginPath }];
view.AddCookie(TokenCookieName,string.Empty);
return view;
};

I’m not too happy about the emptying of the cookie. I would much rather remove it, but I couldn’t find a way to do so using Nancy. So if you know how to do that, please let me know. Right now it creates an empty cookie, which is really annoying…

Ok, so where is the “magic” happening? Well, it is in the Before pipeline, which I have hooked up to point to a method that handles authentication…

The first thing I do there is to check if the request is for the login page. If it is, I just return. I can’t secure the login page, that would cause some issues…

Next, I use the AcsHelper to check if the request is a sign in response. If it is, I once again use the AcsHelper to parse the response. If that works, I set a cookie with the token in it, and redirect to “/” using the aforementioned method… If it doesn’t, I send a redirect response, redirecting to the login page…

If it isn’t a sign in response, I make sure that the request carries a security token cookie, and that the cookie contains a valid token. If it doesn’t, I redirect the user to the login page… And if it does, I let the request through…

private Response OnBefore(NancyContext ctx)
{
    if (ctx.Request.Path == LoginPath)
        return null;

    SecurityToken token;
    if (AcsHelper.IsSignInResponse(ctx.Request.Form))
    {
        if (AcsHelper.TryParseSignInResponse(new Uri(Context.Request.Url.ToString()), Context.Request.Body, out token))
        {
            var view = View["RedirectView.sshtml", new { Url = "/" }];
            view.AddCookie(TokenCookieName, AcsHelper.SerializeToken(token));
            return view;
        }
        return Response.AsRedirect(LoginPath, RedirectResponse.RedirectType.Temporary);
    }

    if (!ctx.Request.Cookies.ContainsKey(TokenCookieName) || !AcsHelper.VerifyTokenXml(HttpUtility.UrlDecode(ctx.Request.Cookies[TokenCookieName]), out token))
    {
        return Response.AsRedirect(LoginPath, RedirectResponse.RedirectType.Temporary);
    }

    return null;
}

That’s it! An Azure ACS secured NancyModule…

The next step I want is obviously to roll this into a re-usable thing, and maybe integrate it a bit better with Nancy. But as this was, as previously mentioned, a quick spike, that will have to be pushed into the future. Maybe after I have had time to talk to @TheCodeJunkie about how to do that…

And as usual, code can be downloaded here: NancyACS.zip (642.03 kb)


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Michael Collier (@MichaelCollier) described Sending Text Messages from Your Windows Azure Service in a 5/9/2012 post:

Recently I was waiting at O’Hare for a flight back to Columbus. I fired up Evernote to catch up on some Windows Azure articles I wanted to read (I save a lot of things to Evernote). One of the first articles I read was a CodeProject article by Luke Jefferson at Red Gate. Luke’s article provides a really nice basis for using Cerebrata’s PowerShell Cmdlets and Twilio for sending SMS messages when your Windows Azure service has “issues”.

This got me thinking – could I write something similar (I was inspired) before I returned home to Columbus? I’d like to tweak it a little to run as a Windows Azure worker role. I would have to do most of the writing without the crutch of the Internet (remember – no WiFi at O’Hare and no WiFi on my plane). I would have a brief internet connection before getting on the plane if I used my awesome Nokia Lumia 900 for tethering. This is doable!

The timing couldn’t have been better! I remembered seeing a Twitter message earlier in the day from @WindowsAzure announcing a new promotion from Twilio to get some free messages for Windows Azure customers. Perfect time to try this out!


By the way, if you don’t have Windows Azure yet, now is a good time to get it.


While Luke’s article on CodeProject uses PowerShell cmdlets, I wanted to try it with regular C# code and run the solution in a simple Windows Azure worker role. To do so I would need to work with the Windows Azure Service Management API. As Luke points out in his article, the Windows Azure Service Management API is a REST API. He does a great job of explaining the basics, so be sure to check it out and head over to MSDN for all the details.

Unfortunately there is not yet a nice .NET wrapper for this API. I took a look at the Azure Fluent Management library, but it didn’t yet have all the features needed for this little pet project (but it looks to be cool – something to keep an eye on). Thankfully, I remembered I had read Neil Mackenzie’s excellent Microsoft Windows Azure Development Cookbook and it contained a recipe for getting the properties of a hosted service. Bingo! This recipe is a very helpful one (like many in Neil’s book) and I had the code standing by in a sample project I put together while reading Neil’s book. With the starting point for using the Windows Azure Service Management API in place, the only thing I needed now was an API for working with Twilio…
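
[Editorial note: the ServiceManagementOperation helper used below comes from Neil’s cookbook recipe and isn’t reproduced in Michael’s post. Purely as an illustration of what such a helper has to do - attach the management certificate found by thumbprint and send the x-ms-version header - a rough sketch follows; the API version string and certificate store location are assumptions:]

using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Xml.Linq;

public class ServiceManagementOperation
{
    private readonly string _thumbprint;

    public ServiceManagementOperation(string thumbprint)
    {
        _thumbprint = thumbprint;
    }

    public XDocument Invoke(string uri)
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Headers["x-ms-version"] = "2011-10-01";    // Service Management API version (assumed)
        request.ClientCertificates.Add(GetCertificate());  // authenticates the subscription

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            return XDocument.Load(response.GetResponseStream());
        }
    }

    private X509Certificate2 GetCertificate()
    {
        // Assumes the management certificate was installed to the CurrentUser\My store.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        try
        {
            return store.Certificates.Find(X509FindType.FindByThumbprint, _thumbprint, false)[0];
        }
        finally
        {
            store.Close();
        }
    }
}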

Time to fire up the phone tethering feature, sign up for Twilio, and download their API. I decided to use the twilio-csharp client library and installed that via NuGet. Easy enough. With everything that I needed downloaded, it was time to power down and get on the plane.

The basics of what I wanted to do are pretty simple:

  1. Get the properties of a specific Windows Azure hosted service
  2. Check if the service is Running or not
  3. If the service is not Running, send an SMS message to me to let me know that something is not right.
  4. Sleep for a little bit and then repeat the process.

Get Hosted Service Properties

private HostedServiceProperties GetHostedServiceProperties(string subscriptionId, string serviceName, string thumbprint)
{
    String uri = String.Format(ServiceOperationFormat, subscriptionId, serviceName);

    ServiceManagementOperation operation = new ServiceManagementOperation(thumbprint);
    XDocument hostedServiceProperties = operation.Invoke(uri);

    var deploymentInformation = (from t in hostedServiceProperties.Elements()
                                 select new
                                 {
                                     DeploymentStatus = (from deployments in t.Descendants(WindowsAzureNamespace + "Deployments")
                                                         select deployments.Element(WindowsAzureNamespace + "Deployment").Element(WindowsAzureNamespace + "Status").Value).First(),
                                     RoleCount = (from roles in t.Descendants(WindowsAzureNamespace + "RoleList")
                                                  select roles.Elements()).Count(),
                                     InstanceCount = (from instances in t.Descendants(WindowsAzureNamespace + "RoleInstanceList")
                                                      select instances.Elements()).Count()
                                 }).First();

    var properties = new HostedServiceProperties
    {
        Status = deploymentInformation.DeploymentStatus,
        RoleCount = deploymentInformation.RoleCount,
        InstanceCount = deploymentInformation.InstanceCount
    };

    return properties;
}
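
[Editorial note: the HostedServiceProperties type isn’t shown in the post; judging from how it is used above, it is presumably just a small DTO along these lines:]

public class HostedServiceProperties
{
    public string Status { get; set; }
    public int RoleCount { get; set; }
    public int InstanceCount { get; set; }
}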

Send a Message if Not Running

// Get the hosted service
var serviceProperties = GetHostedServiceProperties(subscriptionId, hostedServiceName, managementCertificateThumbprint);

// If the service is not running.
if (serviceProperties.Status != "Running")
{
    string message = string.Format("Service '{0}' is not running.  Current status is '{1}'.",
                                   hostedServiceName, serviceProperties.Status);

    // Send the SMS message
    twilio.SendSmsMessage(fromPhoneNumber, toPhoneNumber, message);
}

Putting it All Together

private readonly XNamespace WindowsAzureNamespace = "http://schemas.microsoft.com/windowsazure";
private const string ServiceOperationFormat = "https://management.core.windows.net/{0}/services/hostedservices/{1}?embed-detail=true";

public override void Run()
{
    Trace.WriteLine("Starting Windows Azure Notifier Role", "Information");

    // Get the configuration settings needed to work with the hosted service.
    var hostedServiceName = GetConfigurationValue("HostedServiceName");
    var subscriptionId = GetConfigurationValue("SubscriptionId");
    var managementCertificateThumbprint = GetConfigurationValue("ManagementCertificateThumbprint");

    // Get the configuration settings for Twilio.
    var twilioId = GetConfigurationValue("TwilioId");
    var twilioToken = GetConfigurationValue("TwilioToken");
    var fromPhoneNumber = GetConfigurationValue("FromPhoneNumber");
    var toPhoneNumber = GetConfigurationValue("ToPhoneNumber");

    // Create an instance of the Twilio client.
    var twilio = new TwilioRestClient(twilioId, twilioToken);

    while (true)
    {
        // Get the hosted service
        var serviceProperties = GetHostedServiceProperties(subscriptionId, hostedServiceName, managementCertificateThumbprint);

        // If the service is not running.
        if (serviceProperties.Status != "Running")
        {
            string message = string.Format("Service '{0}' is not running. Current status is '{1}'.",
                                           hostedServiceName, serviceProperties.Status);

            // Send the SMS message
            twilio.SendSmsMessage(fromPhoneNumber, toPhoneNumber, message);
        }

        Thread.Sleep(TimeSpan.FromMinutes(5));
        Trace.WriteLine("Working", "Information");
    }
}
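
[Editorial note: GetConfigurationValue isn’t shown in the post either; assuming it simply wraps the role environment’s configuration API (Microsoft.WindowsAzure.ServiceRuntime), a minimal version living in the same worker role class would be:]

private static string GetConfigurationValue(string key)
{
    // Reads a setting defined in the role's ServiceConfiguration.cscfg.
    return RoleEnvironment.GetConfigurationSettingValue(key);
}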
 

I had most of this put together before the stewardess informed me it was time to power down in preparation for landing. The only things I still needed were a Windows Azure subscription ID, a Windows Azure management certificate thumbprint, and a hosted service to test this against. I have several Windows Azure hosted services running for various reasons, so all this was easy enough to get.

With all the necessary bits in place, it’s time to test this out. In order to get this working in Windows Azure (as opposed to my local emulator) I would need to include my management certificate as part of the role I’m deploying. When using the emulator, the certificate is already on my development machine so I didn’t need to do anything special.

Add the certificate to the role

I would also upload this certificate as a service certificate.

Windows Azure Hosted Service Certificates

With the certificates in place, I was ready to deploy the service as an Extra Small worker role. I then picked one of my demo apps and told Windows Azure to shut it down. Shortly after that, I received a new text message from my Twilio account!

While this was just a simple proof-of-concept, it was pretty cool and can be pretty powerful.

If you’d like the whole source code for this project, you can download it here.


Haishi Bai (@HaishiBai2010) explained Azure Compute Emulator Trick: Resetting a single web role in a multi-role solution in a 5/9/2012 post:

I have an Azure application that comprises two roles – one web role and one worker role. In a particular scenario, I want to restart the web role in the Compute Emulator while keeping the worker role running. Unfortunately the current version of the Compute Emulator only allows you to restart at the deployment level. What’s the workaround?

A web role, which is essentially an ASP.NET application, monitors its web.config file. When the file is changed, the website assemblies are rebuilt (well, to be exact, rebuilt by the JIT when the next request arrives). This gives us an easy way to reset a web role while keeping the worker role working: modify the web.config file - for example by adding a comment element. Once you update the web.config file, you’ll notice the web role is restarted in the Compute Emulator UI when you hit the web site again:

[Screenshot: Compute Emulator UI showing the restarted web role]

Another approach (thanks to my colleague Avilay) is to explicitly request that a role be recycled in your code. The following code snippet is a sample MVC controller method that asks the hosting environment to recycle the current instance by calling the RoleEnvironment.RequestRecycle() API:

public EmptyResult Recycle()
{
   RoleEnvironment.RequestRecycle();
   return new EmptyResult();
}

Using this method, you can recycle a specific instance of a role while keeping other instances of the same role (as well as other roles) running.


Wely Lau (@wely_live) explained Installing Third Party Software on Windows Azure – What are the options? in a 5/9/2012 post to Red Gate Software’s ACloudyPlace blog:

I have seen this question asked many times now: “How do I install third party software on Windows Azure?” This is a reasonably important question to address as Windows Azure applications often need to use third party software components.

In some cases, using a software component can be as simple as adding a reference to it. You can also set the Copy Local property to True to bring the component along with your service package to the cloud. However, in some cases a proper installation is required. This is because the installation does other things than just copying the component to the system (such as modifying the registry, registering the components in the GAC, etc.) One example would be when installing Report Viewer on the Web Role to display reports.

This article will explain three techniques you can use to install third party software on Windows Azure. We will cover why and how to install third party software, and the catches that come with each technique.

Before diving into the specific techniques, let’s refresh the concept behind the current version of Windows Azure PAAS as it relates to what we’ll be discussing.

Design for Scale: Windows Azure Stateless VM

Windows Azure emphasizes the application philosophy of scaling-out (horizontally) instead of scaling-up (vertically). To achieve this, Windows Azure introduces the stateless virtual machine (VM). This means a VM’s local disks will not be used for storage since they are considered stateless or non-persistent. Any changes made after the VM is provisioned will be gone if the VM is re-imaged. This can happen if a hardware failure occurs on the machine where the VM is hosted.

Windows Azure persistent storage

Figure 1 - Windows Azure Stateless VM and Persistent Storage

Instead, the recommended approach is to store data to dedicated persistent storage such as SQL Azure or Windows Azure Storage.

Now, let’s discuss each technique to install software on Windows Azure in more detail.

Technique 1: Manual Installation through RDP

The first technique we discuss here is the fastest and easiest, but unfortunately also the most fragile. The idea is to perform a remote desktop (RDP) connection to a specific instance and perform a manual installation. This might sound silly to some of you as we just discussed the stateless VM above. Nonetheless, this technique is pretty useful in staging or testing environments, when we need to quickly assess whether a specific piece of software can run in a Windows Azure environment.

The Catch: The software installed will not be persistent.

NOTE: Do not use this technique in production.

Technique 2: Start-up Task

The second technique we cover here is a Start-up Task. In my opinion, this will probably be the best solution, depending on your circumstances. The idea of a Start-up Task is to execute a script (in the form of a batch file) prior to role initialization. As it is always executed before the role initializes, it will still run even if the instance is re-imaged.

How to?

1. Preparing your startup script

Create a file named startup.cmd using Notepad or another ASCII editor. Copy the following example and save it.

powershell -c "(new-object system.net.webclient).downloadfile('http://download.microsoft.com/download/E/A/1/EA1BF9E8-D164-4354-8959-F96843DD8F46/ReportViewer.exe', 'ReportViewer.exe')"
ReportViewer.exe /passive
  • The first line is to download a file from the given URL to local storage.
  • The second line runs the installer “ReportViewer.exe” in passive mode. We should install using passive or silent mode so there aren’t any dialog pop-up screens. Please also note that each installer may have different silent or passive mode installation parameters.

2. Including startup.cmd in your Visual Studio project

The next step is to include your startup.cmd script in your Visual Studio project. To do that, simply right click on the project name and choose “Add Existing Item”. Browse to the startup.cmd file. Next, set “Copy to Output Directory” to “Copy always”, to ensure that the script will be included inside your package when it is built.


Figure 2 - Including startup.cmd in the Service

3. Adding a Startup Task in your ServiceDefinition.csdef file

The final step is to add a Startup section to the ServiceDefinition.csdef file, specifically below the intended Role tag, as illustrated in the figure below.


Figure 3 – Adding Startup Task in ServiceDefinition.csdef

  • The commandLine attribute requires the path of our startup script
  • The executionContext attribute requires us to choose either:
    • elevated (which will run as admin-role) or
    • limited (non admin-role)
  • The taskType attribute has the following options:
    • Simple [Default] – System waits for the task to exit before any other tasks are launched
    • Background – System does not wait for the task to exit
    • Foreground – Similar to background, except role is not restarted until all foreground tasks exit
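
[Editorial note: putting these attributes together, the Startup element shown in Figure 3 looks roughly like this; it is a reconstruction rather than a copy of the figure, assuming startup.cmd sits in the role’s root folder and that installing Report Viewer needs elevation:]

<Startup>
  <Task commandLine="startup.cmd" executionContext="elevated" taskType="simple" />
</Startup>
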
The Catches

Here are some situations where a startup task cannot be used:

  1. Installation that cannot be scripted out
  2. Installation that requires a lot of user interaction
  3. Installation that takes a very long time to complete
Technique 3: VM Role

The final technique we are looking at is VM Role. In fact, one of the reasons Microsoft introduced VM Role is to address scenarios that a Startup Task can’t handle.

In reality, VM Role is another option amongst Windows Azure Compute Roles. However, unlike Web and Worker Roles, you will have more responsibility when using VM Role. People often make the mistake of treating VM Role as IAAS. This is not appropriate as VM Role still inherits behaviors from Web and Worker Roles. VM Role still can be easily scaled out just like Web and Worker Roles. Similarly, storing data in VM Role’s local disk is considered non-persistent.

The following figure illustrates the lifecycle of VM Role.

Figure 4 – VM Role Lifecycle from the Windows Azure Platform Training Kit. Find the whole PowerPoint presentation here: http://acloudyplace.com/wp-content/uploads/2012/05/MovingApplicationsToTheCloudWithVMRole.pptx

Let’s drill down into the first step, “Build VM Image”, in more detail. There are several tasks that should be done here. First of all, create the VHD that contains the operating system. The next step is to install the Windows Azure Integration Components onto the image. Subsequently, you can install and configure the third party software. Finally, you run SysPrep to generalize the VM image.

The Catches

There are several catches when using VM Role:

  1. You will have more responsibility when using VM Role, including: building, customizing, installing, uploading, and eventually maintaining the VM image.
  2. Up to now, the only supported OS for VM Role is Windows Server 2008 R2.
  3. At the time of writing this article, VM Role is still at beta. As we know, significant changes may happen to the beta product.
Conclusion

We have covered three techniques to install software in Windows Azure so far. Although, Startup task remains the recommended option in most cases, it may not be the most suitable all the time. RDP and VM Role can sometimes be advantageous depending on the scenario.


Full disclosure: I’m a paid contributor to ACloudyPlace.com.


Mary Jo Foley (@maryjofoley) reported Microsoft: We have 'high tens of thousands' of Azure customers in a 5/8/2012 post to ZDNet’s All About Micorosoft blog:

imageMicrosoft has been careful when sharing information about its Windows Azure customer counts. In 2010, the Redmondians said they had 10,000 Azure customers. In 2011, it was 31,000. (Microsoft officials declined to say if any of these were Microsoft users and how many were paying customers.)

On May 8, Microsoft Corporate Vice President of Azure Marketing Bob Kelly provided Merrill Lynch Technology Conference attendees with another tally tidbit. Kelly said Microsoft now has “high tens of thousands of customers” for Windows Azure, with “hundreds” of new customers being added daily. Again, Microsoft didn’t (and won’t) say whether this total includes any Microsoft customers and/or whether all of these customers are paying users.

(I got these datapoints from Directions on Microsoft analyst Rob Helm, who tweeted Kelly’s remarks. I was unable to listen to Kelly’s talk due to technical difficulties with the webcast. Update (May 9): I finally was able to listen to the webcast, and can verify Kelly said everything in Helm’s tweets.)

The latest customer count wasn’t the only Azure news Kelly shared. He also said, again, according to Helm’s tweet, that Microsoft’s Dynamics CRM service will be hosted on Windows Azure before the end of calendar 2012. Currently, Dynamics CRM is hosted in Microsoft datacenters, and pieces of it are already on Azure, but the core service itself is not hosted on Azure. Microsoft officials have said repeatedly that the company planned to move Dynamics CRM to Azure but have consistently declined to say when. (I asked again today but so far no response from the CRM team.)

We already know that Office 365 uses Windows Azure Active Directory for single sign-on/identity purposes. Unsurprisingly, based on Helm’s tweet of Kelly’s remarks, it sounds like Dynamics CRM will, as well, once it is hosted on Windows Azure.

Kelly also mentioned Microsoft’s plans to add infrastructure as a service (IaaS) elements to its Azure platform. Right now, Azure is almost a pure platform as a service (PaaS) cloud platform. But as those who study roadmaps (and those who love them) know, Microsoft has been talking about plans to add IaaS elements to Windows Azure for years. Anybody remember talk of server app-virtualization, with the idea of allowing customers to package existing apps/environments in virtual machines?


More recently, Microsoft’s plans to add a persistent virtual machine capability – allowing users to host SharePoint, SQL Server and Linux (!) on Azure — made a reappearance on Redmond’s roadmap. So far, there have been no public sightings of the expected persistent VM, but who knows… maybe a spring release of Windows Azure could bring that promised capability to testers….


RealWire News Distribution reported “Epicor to Sharpen Its Next-Generation Epicor ICE Business Architecture with Windows Azure and SQL Azure” in an introduction to its Epicor to Bring On-Demand ERP to the Windows Azure news release of 5/8/2012:

Epicor Software Corporation, a global leader in business software solutions for manufacturing, distribution, retail and services organizations, today announced a strategic alliance with Microsoft Corp. whereby Epicor® will use Windows Azure to expand on-demand delivery of its next-generation enterprise resource planning (ERP) suite worldwide. The Epicor ICE Business Architecture will also benefit from Windows Azure and SQL Azure.

Epicor and Microsoft share a common vision for cloud computing and building business applications that are delivered the way people want to work; virtually any place, any time on any device. With the open and flexible Windows Azure cloud platform, applications can be quickly built, deployed and managed across a global network of Microsoft-managed datacenters - using virtually any language, tool or framework.

Unveiled at Insights 2012, the third major release of Epicor ICE will provide platform as a service (PaaS) and infrastructure as a service (IaaS) offerings using Windows Azure. The flexible service-oriented architecture (SOA) of Epicor ICE provides connectivity across a wide range of Epicor business software solutions for the manufacturing, distribution, and services industries, enabling more customers to leverage Epicor Extend solutions delivered via the cloud.

Epicor first introduced Epicor ERP, its breakthrough next-generation enterprise business software solution in 2008. The solution, designed for growing companies in domestic and global markets, was built on Epicor ICE, which fused modern Web 2.0 technologies with Epicor True SOA™ - a flexible, standards-based software architecture for creating and defining business process as reusable "services" to deliver an enabling business platform that offered new levels of flexibility, usability, and agility in support of application-to-application integration and business-to-business collaboration.

"A consistent leader in adopting forward-looking technologies from Microsoft, Epicor is now ready to take advantage of Windows Azure and SQL Azure to deliver a pre-configured virtual environment that simplifies implementation and enables more customers to use the cloud," said Walid Abu-Hadba, corporate vice president, developer and platform evangelism, at Microsoft. "The power of Windows Azure and SQL Azure will give Epicor customers and partners even more choice, flexibility and scalability with Epicor business software solutions - whether delivered on-premise, hosted or in the cloud."

"Epicor was an early adopter of Windows Azure at its inception," said Erik Johnson, vice president, product marketing for Epicor. "And we have worked with a number of Epicor ERP customers to test the benefits of Windows Azure and SQL Azure, which we view as the next game-changers for cloud computing. We are pleased with the performance, scalability and flexibility of these technologies and look forward to leveraging the benefits of Windows Azure to provide our customers with extended capabilities and rich consumer-driven experience today's end users require."

News Facts

  • Epicor and Microsoft align to bring on-demand ERP to Windows Azure cloud
  • Epicor ICE will provide platform as a service (PaaS) and infrastructure as a service (IaaS) offerings using Windows Azure

  • Flexible business architecture to provide connectivity across a range of Epicor business software solutions via the cloud

About Epicor Software Corporation
Epicor Software Corporation is a global leader delivering business software solutions to the manufacturing, distribution, retail and services industries. With nearly 40 years of experience serving midmarket organizations and divisions of Global 1000 companies, Epicor has more than 20,000 customers in over 150 countries. Epicor enterprise resource planning (ERP), point of sale (POS), supply chain management (SCM), and human capital management (HCM) enable companies to drive increased efficiency and improve profitability. With a history of innovation, industry expertise and passion for excellence, Epicor inspires customers to build lasting competitive advantage. Epicor provides the single point of accountability that local, regional and global businesses demand. The Company's headquarters are located in Dublin, California, with offices and affiliates worldwide. For more information, visit www.epicor.com.


Brian Loesgen (@BrianLoesgen) posted a Video case study: PrivacyCentral uses Ruby on Azure on 5/8/2012:

I’ve been busy making videos of a few of the ISVs that I’ve had the pleasure of working with. This is the first of those video case studies.

The video is available here. Enjoy!


Most consumers are unaware of the extent of publicly available, online exposures of their information – including phone numbers, current and past addresses, birth dates, home values, income level, religion, and relative names (including mother's maiden name) – all of which can easily be used for identity theft and other cybercrimes.

Like an antivirus for consumer privacy, PrivacyCentral exposes privacy threats to consumers, provides analysis and reports via a risk profile and risk score, and then empowers consumers with the ability to remove detected threats and monitor for future threats.

In this video, Privacy Central's CEO and founder, Zoiner Tejada, speaks with Microsoft Principal Architect Evangelist Brian Loesgen. Tejada discusses the benefits Privacy Central realized from building on the Azure platform, and how they run their Ruby crawlers on Azure. Tejada also goes on to share some lessons learned along the way during their development cycle, and offers up some tips for people new to the platform.

About Privacy Central

Privacy Central is a San Diego, CA-based startup and a member of Microsoft's BizSpark program. It is the first service that allows consumers to regain full control over their sensitive information.

PrivacyCentral is dedicated to protecting consumer privacy, with powerful tools designed to make protection simple.

About BizSpark

Microsoft BizSpark is a global program that helps software startups succeed by giving them access to Microsoft software development tools, connecting them with key industry players, including investors, and providing marketing visibility to help entrepreneurs start a business.


Mary Jo Foley (@maryjofoley) also reported Microsoft to eliminate its Azure branding in billing portal (emphasis added) on 5/8/2012:

Microsoft is informing customers of its Windows Azure cloud that it is rebranding many, if not all, of the component services in a way that eliminates the “Azure” name on its billing portal.

“Windows Azure Compute” will now be known simply as “Cloud Services,” according to the Microsoft officials. SQL Azure is now known as “SQL Database.” Here’s the full list of what’s being rechristened:

(click on the table above to enlarge)

I’m wondering whether the Softies also will be renaming the still-not-formally-announced Windows Azure Active Directory to plain old “Active Directory.”

Update: Still no word from any Microsoft officials on this, but hearing from others that the rebranding may be limited to the billing portal only and won’t be applied externally. I’ve asked for and am still hoping to get an official response at some point.

Update No. 2: A Microsoft spokesperson responded with this statement: “Microsoft continues to invest in the Windows Azure brand and we are committed to delivering an open and flexible cloud platform that enables customers to take advantage of the cloud. We have no additional information to share at this time.” A Microsoft spokesperson asked me to replace the original statement with a more direct one: “Today we informed customers that we simplified the naming of services in our billing statements. This does not affect the Windows Azure brand or name.”

(A couple of my contacts are saying the real reason Microsoft made these changes was actually to emphasize the Azure uber-brand. Not that you can tell that from the customer mail that went out or from the official statement, but that’s supposedly the grand plan, for what it’s worth.)

Update No. 3: Now the @WindowsAzure twitter account is getting into the act. Here’s the clearest update about the naming change yet: “Per our recent customer letter, we r simplifying service naming in billing statements. This doesn’t affect the Windows Azure name or brand.”

Now back to the original story.

“In the coming weeks, we will update the Windows Azure Service names that appear in the usage records you download. These are only name changes – your prices for Windows Azure are not impacted,” according to the note accompanying the table above.

One Azure user said he believed Microsoft’s goal with the change was to align its on-premises and cloud services better.

Microsoft’s stance — almost since 2008, when Windows Azure was still known by its codename “Red Dog” — is that its on-premises Windows products all have cloud complements. This mirroring has been at the crux of Microsoft’s private/public/hybrid cloud positioning, meaning its customers are free to mix and match its on-premises and cloud wares in ways that best suit their businesses.

Microsoft combined its Server and Cloud teams into a single unit in late 2009.

It’s been a busy couple of weeks for the naming police at Microsoft. Last week, Microsoft announced it would be doing away with its Windows Live branding. The company also is renaming some components of its Microsoft Advertising platform as “Bing.”

As part of its Azure portal rebranding move, the Azure team also updated its privacy policy, according to the note sent to customers this week. “The new version includes the same commitments we previously made to maintain the privacy of your personal information, while adding more detailed information,” the note said.

Before Mary Jo’s updates appeared, her post launched a torrent of breathless stories about Microsoft abandoning the “Windows Azure” brand.


Himanshu Singh (@himanshuks) posted Real World Windows Azure: Interview with Adrian Gonzalez, Technology Manager for the San Diego County Public Safety Group on 5/8/2012:

As part of the Real World Windows Azure series, I caught up with Adrian Gonzalez, Technology Manager for the San Diego County Public Safety Group to learn more about how the county uses Windows Azure to ensure its emergency information website is disaster-ready. Read San Diego County Public Safety Group’s success story here. Read on to find out what he had to say.

Himanshu Kumar Singh: Tell me about the San Diego County Emergency Site.

Adrian Gonzalez: San Diego County in California provides emergency, justice, health, and social services to its 3 million residents and municipal services to its unincorporated areas. The San Diego County Emergency Site is produced by the County of San Diego to provide information before, during and after disasters and is the official source of information from the County during a large-scale emergency, providing a variety of recovery information for people affected by a major disaster.

HKS: How did the site perform during large-scale emergencies?

AG: In October 2007, a firestorm ravaged Southern California, consuming 370,000 acres in San Diego County and forcing the evacuation of 515,000 residents. This firestorm challenged the county to disseminate emergency information faster and more broadly than ever before. With three major universities plus famed destination spots, San Diego attracts students and vacationers from around the world, all of whom had friends and relatives back home scouring the Internet for information about their welfare during the fires.

Many of those people went to the county’s website. Many more went there after CNN linked to it from cnn.com. The site’s traffic rose to 12,000 page views per hour before the site crashed, and it took several days to re-launch it, at a time when every moment counted.

HKS: How did these issues shape your planning for what the site needs to be built to handle?

AG: We were determined not to be caught in a similar bind again. We wanted an online presence that could scale to handle those 12,000 page views and more because we couldn’t know how much traffic we’d get the next time, but we knew it would be more. Mobile phones hadn’t been a major factor in 2007, but they were quickly becoming an ever-larger one. We had to be ready for the future.

We also wanted to address other limitations, such as a lack of visual features and a time-consuming update process – which is the opposite of what we needed in an emergency services site.

HKS: What solutions did you evaluate?

AG: To close these gaps, we looked at building out our two-server site to support 120,000 page views per hour—10 times the load that had brought down the original site. The cost was high: around US$350,000 to build a data center, plus $80,000 per year to maintain it.

We also considered a cloud-computing platform hosted in data centers across the Internet, which could mitigate the capital expense and scalability issues. We first looked at Amazon Elastic Compute Cloud, but realized we would still be responsible for ongoing maintenance if we chose that service.

Then I saw a demonstration of the Miami-Dade County 311 information system, which was hosted on Windows Azure. They had solved the same workflow and traffic-spike issues that we faced using Microsoft cloud services, and the operating costs appeared to be minimal.

HKS: How did you proceed with Windows Azure?

AG: We engaged Adxstudio, a Microsoft Partner Network member with multiple Gold competencies, to test the ability of Windows Azure to support the county’s scalability goals on a simulated site, even hitting that site from third-party sources around the world. Windows Azure passed the test easily, and the county commissioned Adxstudio to build its new emergency services website on the Microsoft cloud platform.

HKS: Tell me more about the solution Adxstudio built.

AG: Adxstudio used its flagship product—Adxstudio Portals for Microsoft Dynamics CRM, built on the Microsoft .NET Framework—to construct the scalable, content-managed website. The new emergency services website supports live, streaming video; Twitter and RSS (Really Simple Syndication) feeds; Bing Maps for navigating threats and resources; and location-based information on, for example, the nearest shelters.

HKS: What are some of the benefits you’ve seen from moving to Windows Azure?

AG: With Windows Azure, we got the scalability we sought, and then some. We’ve achieved our goal to support 120,000 page views per hour and use only three Windows Azure instances to do so. When we saw Windows Azure exceeding our scalability goals by a factor of 162 times, we thought that would be high enough. We’re completely comfortable with the ability of Windows Azure to meet our needs, no matter how fast those needs grow.

In addition, the portal delivers more information than the previous website did, while making that information easier for county personnel to update and users to find. Because the portal is hosted in the cloud, it can be updated from anywhere with an Internet connection, without needing virtual private network connections to the county network.

Online maps and data such as shelter status can be updated automatically and in near real time; these formerly manual processes used to take anywhere from minutes to hours to implement.

HKS: What about the cost savings?

AG: We needed Windows Azure to be as cost-effective as it is scalable, and it is. In contrast to the $350,000 we might have spent to build an on-premises solution, we’ve avoided capital investment with Windows Azure but instead pay only for what we use. This comes to about $18,000 a year for non-emergency use, compared to $80,000 a year to maintain an on-premises solution: a savings of about 78 percent. I estimate that emergency use would bring the fee up to only about $7,000 for the month in which the emergency occurred.

Read how others are using Windows Azure.


Manu Cohen-Yashar (@ManuKahn) described Hosting Classic ASP on Azure in a 5/7/2012 post:

Is it possible to run a classic ASP site on Windows Azure? Of course it is; anything that runs on IIS can be hosted in Azure.

So how do we do it?

  1. Create a simple ASP page in Notepad (e.g. using http://support.microsoft.com/kb/301097)
  2. Create startup.cmd to install the classic ASP engine:

     start /w pkgmgr /iu:IIS-ASP

  3. Create an ASP.NET Web role and modify the csdef to include a startup task:

     <Startup>
       <Task commandLine="startup.cmd" executionContext="elevated" taskType="simple" />
     </Startup>

  4. Add both files to the web role and change their properties to 'Content' and 'Copy if newer'.
  5. Build and publish the solution to Azure.

More info:

Windows Azure (How-to enable classic ASP support).


M. Sheik Uduman Ali (@Udooz) described a Circuit Breaker for Windows Azure in a 5/6/2012 post:

No application is an island. Every application needs to interact with other applications located remotely, or consume data stored remotely. Your application should be cautious and handle instability when interacting with these remote endpoints.

Various practices and patterns are available for implementing a stable system. Michael T. Nygard specifies the following stability patterns for accessing remote endpoints in his book Release It!:

  • Timeout – don’t wait for a response to a request beyond the given time limit
  • Retry – strategically repeat the request until it succeeds
  • Circuit Breaker – fail fast if the remote endpoint refuses connections, and prevent repeated attempts

These patterns are very much required for applications hosted in the cloud. The Windows Azure managed library implements the first two patterns on its storage service APIs. This post explains how and when to use the Circuit Breaker pattern in Azure.
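For example, with the 2012-era Microsoft.WindowsAzure.StorageClient library, the built-in timeout and retry support can be configured on a blob client as in the minimal sketch below (later storage client versions expose a different API):

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class StorageRetryExample
{
    static void Main()
    {
        // Development storage keeps the sketch self-contained.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // Timeout pattern: fail any request that takes longer than 30 seconds.
        blobClient.Timeout = TimeSpan.FromSeconds(30);

        // Retry pattern: retry up to 3 times, waiting 2 seconds between attempts.
        blobClient.RetryPolicy = RetryPolicies.Retry(3, TimeSpan.FromSeconds(2));
    }
}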

Problem

Generally, remote endpoints are accessed from many places across the system, and the reliability of the connection might not be consistent. Timeout and retry policies help to handle a failure that affects a particular request or a very short connection refusal. However, there are situations, such as a cloud service outage or a remote endpoint under maintenance, where timeout and retry logic is not a real rescue. Instead, a quick failure-detection mechanism helps the various access points in the system react quickly and avoid unnecessary remote invocations.

Example

Let us take an example. There is an online flight reservation system hosted in Azure. It uses various flight operators’ databases through their WCF services to determine availability. It stores its customer details and their booking information in SQL Azure, as depicted in the figure below:


The Flight Reservation System (FRS) should take care of the following failures when interacting with these remote resources:

  • The flight availability query services (Flight A and Flight B) are unavailable daily between 11:30 PM and 11:55 PM.
  • The Flight B operator provides a very low SLA, so connection refusals happen frequently.
  • SQL Azure outages are uncommon, but the system should handle them.
  • Sometimes a specific Azure data center responds slowly; the system should handle that as well.

In some cases, one subsystem of an application may create, update and delete a set of blobs or queue messages that another subsystem of the application requires. Leaving this unhandled may result in an unreliable system.

Forces
  • Fail fast and handle it gracefully
  • Prevent repeated requests to a remote endpoint that has refused an invocation
Solution

The circuit breaker component keeps recent connection-state information for a remote endpoint globally across the system. It behaves like a residential electrical fuse. Initially the circuit is in the closed state. If the attempts to connect to the remote resource keep failing (retry), the circuit breaker opens the circuit to prevent subsequent invocations for a while. This is called a “trip”, and the circuit breaker is now in the open state. Some time later (the threshold time), when a new request is made, the circuit breaker half-opens the circuit (meaning it tries to make an actual connection to the remote endpoint); if that succeeds it closes the circuit, otherwise it opens it again. The attempt-and-resume policy is global for a remote endpoint, so a unique circuit breaker should exist for every remote endpoint. The conceptual diagram below depicts this.

Behavior

The sequence diagram below explains the typical circuit breaker behavior.


The “Timeout?()” method specifies the connection timeout. The number of attempts before moving to the open state is not shown in this diagram. The AttemptReset() call in the half-open state happens when a request is made some time after the circuit breaker entered the open state; the time that must pass before the circuit can move to half-open is called the threshold time.

The diagram below shows the various states of the circuit breaker for a remote resource.
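In code, the closed/open/half-open transitions described above might look like the following minimal sketch. This is a conceptual illustration only, not the Udooz library; thread safety, per-URI policies and persistent state are omitted:

using System;

public class SimpleCircuitBreaker
{
    private enum State { Closed, Open, HalfOpen }

    private State state = State.Closed;
    private int failureCount;
    private DateTime openedAtUtc;

    private readonly int maxFailures;        // failures tolerated before tripping
    private readonly TimeSpan openDuration;  // the "threshold time" before half-open

    public SimpleCircuitBreaker(int maxFailures, TimeSpan openDuration)
    {
        this.maxFailures = maxFailures;
        this.openDuration = openDuration;
    }

    public T Execute<T>(Func<T> remoteCall)
    {
        if (state == State.Open)
        {
            if (DateTime.UtcNow - openedAtUtc < openDuration)
                throw new InvalidOperationException("Circuit is open; failing fast.");
            state = State.HalfOpen;          // threshold elapsed, allow one attempt
        }

        try
        {
            T result = remoteCall();
            Reset();                         // success closes the circuit again
            return result;
        }
        catch
        {
            RecordFailure();
            throw;
        }
    }

    private void Reset()
    {
        failureCount = 0;
        state = State.Closed;
    }

    private void RecordFailure()
    {
        failureCount++;
        if (state == State.HalfOpen || failureCount >= maxFailures)
        {
            state = State.Open;              // trip the breaker
            openedAtUtc = DateTime.UtcNow;
        }
    }
}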

Implementation and Example

I have started developing a circuit breaker library for Windows Azure, with the following capabilities:

  • Handle the various types of remote invocation that happen in a typical Azure application, such as Azure storage services, SQL Azure, web requests, and WCF service invocations.
  • Automatically detect and react to the exceptions that are relevant to the circuit breaker concept, such as TimeoutException for WCF’s CommunicationObject.
  • Manage all remote resources by their URIs, including differentiating resources by their sub-URIs.
  • Instead of a singleton circuit breaker per remote resource, maintain the state for a resource in a persistence store such as Azure Cache, table storage, or blob storage.
  • Allow a circuit breaker policy to be defined globally for a remote resource.
  • Log the open and half-open states of the circuit breaker instances.
  • Allow a global “failure handling strategy” to be defined for a remote resource.

In this post, I use a limited scope of Azure Circuit Breaker for easier understanding. I have a vanilla ASP.NET MVC3 application and a hello world WCF service; both are in the same hosted service. The code for the WCF service is shown below:

public class HelloService : IHelloService
{
    public string Greet(string name)
    {
        return string.Format("Hello, {0}", name);
    }
}

I have hosted this service on a worker role and opened a TCP/IP port for internal access. For demonstration purposes, I keep this service host open for one minute and then close it in the WorkerRole’s Run() method as shown below:

using (ServiceHost host = new ServiceHost(typeof(HelloService)))
{
    // service host initialization code
    // removed for clarity
    host.AddServiceEndpoint(typeof(IHelloService), new NetTcpBinding(SecurityMode.None), endpointurl, new Uri(listenurl));
    host.Open();
    while (true)
    {
        Thread.Sleep(TimeSpan.FromMinutes(1));
        break;
        //Trace.WriteLine("Working", "Information");
    }
}

The circuit breaker policy has been defined in the MVC3 app’s Global.asax.cs as shown below:

CbPolicyBuilder
.For("net.tcp://localhost:9001/HelloServiceEndpoint")
.Timeout(TimeSpan.FromSeconds(30))
.MaxFailure(1).OpenTripFor(TimeSpan.FromSeconds(30))
.Do();

As I mentioned, the policy is defined against the remote resource URI. Here, for the net.tcp://localhost:9001/HelloServiceEndpoint resource, if an invocation is not successful or there is no response within 30 seconds (Timeout), attempt it only once (MaxFailure) and keep the circuit breaker open for 30 seconds (OpenTripFor). After 30 seconds, the circuit breaker is half-opened when the next connection is made. The policy is persisted in the persistence store and accessed across the application.

The MVC3 app has two controllers named HomeController and AuthorController where this service is invoked using the circuit breaker, as shown below:

//specify the resource access type, here ChannelFactory<T>
CircuitBreaker<ChannelFactory<IHelloService>>
    // the resource access type instance
    .On(new ChannelFactory<IHelloService>(helloServiceBinding, epHelloService))
    // made remote invocation
    .Execute<string>(cf =>
        {
            var helloClient = cf.CreateChannel();
            return helloClient.Greet("Udooz!");
        },
        // if everything goes well
        msg => ViewBag.Message = msg,
        // oops, circuit trip broken
        ex =>
        {
            ViewBag.Message = ex.Message;
        });

The same code is present in AuthorController. I don’t provide any link to the Index() action of this controller in the page; test it yourself by entering the URL in the browser.

Final Note

You can download the above sample from http://udooz.net/file-drive/doc_details/25-azurecircuitbreaker.html. It also contains the basic CircuitBreaker library, which this post does not cover in detail. The code has the basic design needed to implement a circuit breaker for Azure, but does not have a production-ready state-persistence repository implementation or other IoC aspects. The sample uses in-memory state persistence (hence per web/worker role state) and supports the WCF ChannelFactory type.

I shall announce the production-ready library once it is completed.


Tomasz Janczuk (@tjanczuk) updated his Hosting node.js applications in IIS on Windows post of 8/26/2011 on 4/24/2012 (Missed when published):

This post is updated as of April 24, 2012: iisnode v0.1.18 and node.js v0.6.15
Pусский перевод (Russian translation).

In this post I discuss hosting node.js applications in IIS on Windows using the iisnode project I have lately been working on.

What benefits does iisnode provide?

The iisnode project provides a native IIS 7.x module that allows hosting of node.js applications in IIS 7.x and IIS 7.x Express (WebMatrix). The project utilizes the Windows build of node.exe.

Some of the advantages of hosting node.js applications in IIS using the iisnode module as opposed to self-hosting node.exe processes include:

  • Process management. The iisnode module takes care of lifetime management of node.exe processes making it simple to improve overall reliability. You don’t have to implement infrastructure to start, stop, and monitor the processes.
  • Scalability on multi-core servers. Since node.exe is a single threaded process, it only scales to one CPU core. The iisnode module allows creation of multiple node.exe processes per application and load balances the HTTP traffic between them, therefore enabling full utilization of a server’s CPU capacity without requiring additional infrastructure code from an application developer.
  • Auto-update. The iisnode module ensures that whenever the node.js application is updated (i.e. the script file has changed), the node.exe processes are recycled. Ongoing requests are allowed to gracefully finish execution using the old version of the application, while all new requests are dispatched to the new version of the app.
  • Integrated debugging. The iisnode module is fully integrated with the node-inspector debugger. Node.js applications can be debugged remotely from any WebKit-based browser without any additional configuration or server side process creation.
  • Access to logs over HTTP. The iisnode module provides access to the output of the node.exe process (e.g. generated by console.log calls) via HTTP. This facility is key in helping you debug node.js applications deployed to remote servers.
  • Side by side with other content types. The iisnode module integrates with IIS in a way that allows a single web site to contain a variety of content types. For example, static content (HTML, CSS, images, and client side JavaScript files) can be efficiently handled by IIS itself, while node.js requests are handled by iisnode. A single site can also combine PHP applications, ASP.NET applications, and node.js. This enables choosing the best tools for the job at hand as well progressive migration of existing applications.
  • Minimal changes to node.js application code. The iisnode module enables hosting of existing HTTP node.js applications with very minimal changes. Typically all that is required is to change the listen address of the HTTP server to one provided by the iisnode module via the process.env.PORT environment variable.
  • Integrated management experience. The iisnode module is fully integrated with the IIS configuration system and uses the same tools and mechanisms as other IIS components for configuration and maintenance.

In addition to benefits specific to the iisnode module, hosting node.js applications in IIS allows the developer to benefit from a range of IIS features, among them:

  • port sharing (hosting multiple HTTP applications over port 80)
  • security (HTTPS, authentication and authorization)
  • URL rewriting
  • compression
  • caching
  • logging
Hello World

Follow the installation instructions at the iisnode project site to get the module and samples installed on your Windows box with IIS7 enabled.

The hello world sample consists of two files: hello.js and web.config.

This is the hello.js file from the helloworld sample:

var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello, world! [helloworld sample]');
}).listen(process.env.PORT);

You will notice that the only difference between this code and the hello world sample from the front page of http://nodejs.org is in the specification of the listening address for the HTTP server. Since IIS controls the base address for all HTTP listeners, a node.js application must use the listen address provided by the iisnode module through the process.env.PORT environment variable rather than specify its own.

The web.config file is required to instruct IIS that the hello.js file contains a node.js application. Otherwise IIS would consider this file to be client side JavaScript and serve it as static content. The web.config designates hello.js as a node.js application by scoping the registration of the handler in the iisnode module to that file only:

<configuration>
  <system.webServer>
    <handlers>
      <add name="iisnode" path="hello.js" verb="*" modules="iisnode" />
    </handlers>
  </system.webServer>
</configuration>

This handler registration allows the same web site to contain other *.js files (e.g. jQuery libraries) that IIS will continue serving as static files.

What else can iisnode do?

For those familiar with self-hosting node applications, iisnode combines the benefits of cluster, supervisor, node-inspector, forever, and node-static in a single package.

Scalability on multi-core servers

For every node.js application (e.g. hello.js above), the iisnode module can create multiple node.exe processes and load balance traffic between them. The nodeProcessCountPerApplication setting controls the number of node.exe processes that will be created for each node.js application. Each node.exe process can accommodate a configurable number of concurrent requests (maxConcurrentRequestsPerProcess setting). When the overall concurrent active request quota has been reached for an application (maxConcurrentRequestsPerProcess * nodeProcessCountPerApplication), the iisnode module starts rejecting new HTTP requests with a 503 (Server Too Busy) status code; with the default settings of one process per application and 1024 concurrent requests per process, that means the 1025th concurrent request is rejected. Requests are dispatched across the multiple node.exe processes serving a node.js application with a round-robin load balancing algorithm.

Auto-update

Whenever the JavaScript file with a node.js application changes (as a result of a new deployment), the iisnode module will gracefully upgrade to the new version. All node.exe processes running the previous version of the application that are still processing requests are allowed to gracefully finish processing in a configurable time frame (gracefulShutdownTimeout setting). All new requests that arrive after the JavaScript file has been updated are dispatched to a new node.exe process that runs the new version of the application. The watchedFiles setting specifies the list of files iisnode will be watching for changes.

Changes in the JavaScript file are detected regardless of whether the file resides on a local file system or a UNC share, but the underlying mechanisms are different. In the case of a local file system, an OS-level directory watching mechanism is used which provides low-latency, asynchronous notifications about file changes. In the case of files residing on a UNC share, file timestamps are periodically polled for changes at a configurable interval (uncFileChangesPollingInterval setting).

Integrated debugging

With iisnode integrated debugging you can remotely debug node.js application using any WebKit-enabled browser. Read more about iisnode integrated debugging.

Access to logs over HTTP

To help in ‘console.log’ debugging, the iisnode module redirects output that the node.exe processes generate to stdout or stderr into a text file. IIS will then serve these files as static textual content over HTTP. Capturing stdout and stderr in files is controlled with a configuration setting (loggingEnabled). If enabled, the iisnode module will create a per-application special directory to store the log files. The directory is located next to the *.js file itself and its name is created by concatenating the *.js file name with a configurable suffix (logDirectoryNameSuffix setting). The directory will then contain one text file per node.exe process dedicated to running this node.js application; these files are named 0.txt, 1.txt, etc. with the file name being the ordinal number of the process serving this application. The logs can be accessed from the browser using HTTP: given a node.js application available at http://mysite.com/foo.js, the output of the first process serving this application would by default be located at http://mysite.com/foo.js.logs/0.txt.

The logDirectoryNameSuffix is configurable to allow for obfuscation of the log location in cases when the service is publicly available. In fact, it can be set to a cryptographically secure or otherwise hard to guess string (e.g. GUID) to provide a pragmatic level of logs privacy. For example, by setting logDirectoryNameSuffix to ‘A526A1F2-4E22-4488-B930-6A71CC7649CD’ logs would be exposed at http://mysite.com/foo.js.A526A1F2-4E22-4488-B930-6A71CC7649CD/0.txt.

Log files are not allowed to grow unbounded. The maxLogFileSizeInKB setting controls the maximum size of a log file. When the log grows beyond that limit, iisnode module will truncate it.

Existing log files can either be appended to or created empty (log file names for processes with the same ordinal number are the same). This is controlled by the appendToExistingLog configuration setting. Lastly, the logFileFlushInterval setting controls how frequently the log file is flushed to disk.

Side by side with other content types

One of the more interesting benefits of hosting node.js applications in IIS using the iisnode module is support for a variety of content types within a single web site. Next to a node.js application one can host static HTML files, client side JavaScript scripts, PHP scripts, ASP.NET applications, WCF services, and other types of content IIS supports. Just like the iisnode module handles node.js applications in a particular site, other content types will be handled by the registered IIS handlers.

Indicating which files within a web site are node.js applications and should be handled by the iisnode module is done by registering the iisnode handler for those files in web.config. In the simplest form, one can register the iisnode module for a single *.js file in a web site using the ‘path’ attribute of the ‘add’ element of the handler collection:

<configuration>
  <system.webServer>
    <handlers>
      <add name="iisnode" path="hello.js" verb="*" modules="iisnode" />
    </handlers>
  </system.webServer>
</configuration>

Alternatively, one can decide that all files in a particular directory are supposed to be treated as node.js applications. A web.config using the <location> element can be used to achieve such configuration:

<configuration>
  <location path="nodejsapps">
    <system.webServer>
      <handlers>
        <add name="iisnode" path="*.js" verb="*" modules="iisnode" />
      </handlers>
    </system.webServer>
  </location>
</configuration>

One other approach one can employ is to differentiate node.js applications from client side JavaScript scripts by assigning a file name extension to node.js applications other than *.js, e.g. *.njs. This allows a global iisnode handler registration that may apply across all sites on a given machine, since the *.njs extension is unique:

<configuration>
  <system.webServer>
    <handlers>
      <add name="iisnode" path="*.njs" verb="*" modules="iisnode" />
    </handlers>
  </system.webServer>
</configuration>

URL Rewriting

The iisnode module composes very well with the URL Rewriting module for IIS. URL rewriting allows you to normalize the URL space of the application and decide which IIS handlers are responsible for which parts of the URL space. For example, you can use URL rewriting to serve static content using IIS’s native static content handler (which is a more efficient way of doing it than serving static content from node.js), while only letting iisnode handle the dynamic content.

You will want to use URL rewriting for the majority of node.js web site applications deployed to iisnode, in particular those using the express framework or other MVC frameworks. Read more about using iisnode with URL rewriting.
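As a rough illustration only (the rule names, folder names and the server.js entry point below are assumptions, not taken from the iisnode samples), a web.config for such an application might let IIS serve static folders directly and rewrite everything else to the node.js entry point:

<configuration>
  <system.webServer>
    <handlers>
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <rewrite>
      <rules>
        <!-- let IIS serve static content directly -->
        <rule name="StaticContent" stopProcessing="true">
          <match url="^(css|images|scripts)/.*" />
          <action type="None" />
        </rule>
        <!-- send everything else to the node.js application -->
        <rule name="DynamicContent">
          <match url=".*" />
          <action type="Rewrite" url="server.js" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>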

Minimal changes to existing HTTP node.js application code

It has been the aspiration for iisnode not to require extensive changes to existing, self-hosted node.js HTTP applications. To that end, most applications will only require a change in the specification of the listen address for the HTTP server, since that address is assigned by IIS rather than left for the application to choose. The iisnode module will pass the listen address to the node.exe worker process in the PORT environment variable, and the application can read it from process.env.PORT:

var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('I am listening on ' + process.env.PORT);
}).listen(process.env.PORT);

If you access the endpoint created by the application above, you will notice the application is actually listening on a named pipe address. This is because the iisnode module uses HTTP over named pipes as a communication mechanism between the module and the worker node.exe process. One implication of this is HTTPS applications will need to be refactored to use HTTP instead of HTTPS. For those applications, HTTPS can be configured and managed at the IIS level itself.

Configuration

The iisnode module allows many of the configuration options to be adjusted with the system.webServer/iisnode section of web.config. Below is the list of options (most of which were described above) with their default values. For detailed and most current description of the options check out the configuration sample.

<iisnode      
nodeProcessCommandLine="&quot;%programfiles%\nodejs\node.exe&quot;"
node_env="%node_env%"
nodeProcessCountPerApplication="1"
maxConcurrentRequestsPerProcess="1024"
maxNamedPipeConnectionRetry="3"
namedPipeConnectionRetryDelay="2000"
maxNamedPipeConnectionPoolSize="512"
maxNamedPipePooledConnectionAge="30000"
asyncCompletionThreadCount="0"
initialRequestBufferSize="4096"
maxRequestBufferSize="65536"
watchedFiles="*.js"
uncFileChangesPollingInterval="5000"
gracefulShutdownTimeout="60000"
loggingEnabled="true"
logDirectoryNameSuffix="logs"
debuggingEnabled="true"
debuggerPortRange="5058-6058"
debuggerPathSegment="debug"
maxLogFileSizeInKB="128"
appendToExistingLog="false"
logFileFlushInterval="5000"
devErrorsEnabled="true"
flushResponse="false"
enableXFF="false"
promoteServerVars=""
/>

Integrated management experience

The iisnode module configuration system is integrated with IIS configuration which allows common IIS management tools to be used to manipulate it. In particular, the appcmd.exe management tool that ships with IIS can be used to augment the iisnode configuration. For example, to set the nodeProcessCountPerApplication value to 2 for the “Default Web Site/node” application, one can issue the following command:

%systemroot%\system32\inetsrv\appcmd.exe set config "Default Web Site/node" -section:iisnode /nodeProcessCountPerApplication:2

This allows for scripting the configuration of node.js applications deployed to IIS.

Feedback

The iisnode project is open source on GitHub. I hope you will find it useful. Please report bugs, share ideas and experiences by leaving a comment here or through https://github.com/tjanczuk/iisnode/issues.

Read more

Debugging node.js applications with iisnode
URL rewriting and iisnode
Developing node.js applications in WebMatrix
Deploying node.js to Windows Azure using Windows Azure SDK for node.js
Using Event Tracing for Windows (ETW) to diagnose node.js applications deployed to iisnode
Overview of the architecture of iisnode


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Andrew Lader described How to Add Images and Text to your LightSwitch Applications (Andrew Lader) in a 5/8/2012 post to the Visual Studio LightSwitch Team blog:

Note: This article applies to LightSwitch in Visual Studio 11 (LightSwitch V2)

It’s common for developers to add static images and text to their screens to help guide their users through the application. Although you can easily add images that come from a database to your screens, up till now adding static images and labels has not been a straightforward task for LightSwitch developers. With the upcoming release of LightSwitch in Visual Studio 11, you can add images and text to your screen without writing any code, or having to first add the images and/or text to your database. This new feature makes adding images and text a snap.

Adding an Image to a Screen

Let’s jump right in and see exactly how easy it is to add an image to a screen. Perhaps you want to add an image to complement a screen that creates new customers for the application. The LightSwitch project is quite simple, consisting of a single Customer table that has just two fields, first and last name. We’ll use the New Data screen template to add a new screen, specifying the Customers table as the screen data.

On this screen, you want to provide a standard image that users will easily recognize as ‘customers’. In the screen designer, pick a location where you want to add an image. For my example, I am going to add it to the Rows Layout that contains the FirstName and LastName elements in the content tree. When I click the Add menu button, the drop down menu is displayed. Notice two additional menus: Add Image… and Add Text…

StaticImageMenu

For now, let’s focus on the Add Image… menu. Click on that menu and you will see the Select Image dialog popup:

SelectImageDialog

This dialog may be familiar since it’s used to select the image for the application logo image, and the application icon. In this case, we’ll add a new image by clicking on the Import… button. This brings up the standard Open dialog. As a quick demonstration, let’s go to the Pictures library on your computer and select one of the sample pictures there (I have chosen the penguins image). Click OK on the Open dialog. Your Select Image dialog should now look like this:

SelectImageWithPenguinsDialog

As the image above illustrates, the penguins image is now added to the list on the left (and selected), and a preview is displayed on the right. Now click OK to accept this selection. Your content tree should look like this:

ContentTreeWithPenguinsImage

That’s it! The image is added to the content tree, despite not being a data-bound element. That’s all we need to do to add an image to our application. So, let’s try it out. What happens when we run it? Hitting F5 yields the following:

F5WithPenguins

As you can see from the image above, the penguins picture is placed just below the first and last name fields, and fills up the rest of the screen. As it turns out, the penguins image has a resolution of 1024x768, so no wonder it shows up so big.

Let’s change a few things. First, let’s find a more apropos image for this screen. I have a generic clip art image I am going to use, but you can use whatever image you wish. To specify a different image, first close the application and return to the screen designer. Make sure the image element in the content tree is selected, and examine the Properties window. You’ll notice there is a link called Choose Image… (circled in red below):

ImagePropertiesPane

Click on this link to bring the Select Image dialog back up. We’ll probably not use the penguins image in our application anymore, so go ahead and delete it by clicking on the Delete button. A dialog prompt asks if you want to continue; click OK. Now, choose the image you really want to use, and click OK to dismiss the dialog. And finally, drag the image element in the content tree to above the first name element:

ContentTreeWithusersImage

Now, let’s use the properties in the Properties window to make sure the image shows up the way we want. By default, the Horizontal Alignment is set to Left, and the Vertical Alignment is set to Top. These are fine. But the Width and Height are set to Auto. Let’s change these to the actual pixel size of our image. In my case, my image is 32x32, so I will set the Height and Width properties to be 32 pixels each.

While doing this, you may notice another property called Show Border in the appearance category. By default, this is unchecked. If you wish, you can check this to add a border around your image. We’ll leave it unchecked for now. Hit F5, and you should see something like this (depending on the image you chose):

F5WithUsers

This looks much better.

Add some Text to a Screen

Now, we’re going to add some text next to our image. But first, we need to reorganize our screen a little bit. We’re going to have a Columns Layout above the Rows Layout where our first and last name elements are located. This will allow us to place our text beside our image. So under the screen’s top-level Rows Layout, add a Columns Layout, and drag it above the Rows Layout that contains the first and last name elements. Then, drag the image element to this new Columns Layout. Your content tree should look like this:

ContentTreeWithColumns

Below the image element, click on the Add menu button, and select Add Text… from the drop down menu:

StaticTextMenu

The Edit Text dialog now appears. Enter some text like this:

EditTextDialog

Click OK to accept the text. Before we run this, let’s examine the properties for the text. In the Properties window, you’ll notice the familiar properties used to control the look and feel of text in LightSwitch. The default settings are good enough for our purposes right now, but let’s make one change. Set the Vertical Alignment to be Center.

Notice that like the image, there is a property called Edit Text… which will bring up the Edit Text dialog (see the circled item below). This will let you modify the text for this element.

TextPropertiesPane

There is one more change we need to make. Select the Columns Layout, go to the Properties window, and set the Vertical Alignment to Top. This will ensure that the Columns Layout does not take any more of the screen real estate than it needs.

Now click F5, and let’s see how our application looks:

F5WithUsersAndText

That’s all there is to it.

Wrapping Up

In this post, I have demonstrated how you can add images and text to your application. This feature adds new menu items to the screen designer that allow you to add images and text to your content tree. To add an image, simply select the new Add Image… menu item; this pops up a dialog that lets you choose which image to add. The image (which is not data-bound) is then added as a new element to the content tree. Likewise, choosing the new Add Text… menu item displays an Edit Text dialog that allows you to add text to the screen. The text (which is also not data-bound) is then added to the content tree. The properties for images and text that have been added to a screen can be modified in the Properties window, just as you would expect. Both the image and text elements have links in the Properties window; clicking these links re-displays their respective dialogs, allowing you to modify the image or text. Images have an additional property, Show Border, that allows you to add a border to your image. Really simple. I hope you are as excited as I am by this new feature, and that you start experimenting with adding images and text to your applications!


Beth Massi (@bethmassi) suggested that you Modernize Apps Look & Feel with Metro Studio and the Cosmopolitan Shell in a 5/8/2012 post:

One of the design goals for LightSwitch in Visual Studio 11 was to modernize the look and feel of the UI. Back in March the team released the beta of the LightSwitch Cosmopolitan Shell and Theme. This theme and shell provide a bunch of improvements over the default ones, like displaying a logo at the top of the application, a streamlined top-bar navigation menu, and a “Metro-style” command bar now located at the bottom of screens. This shell will become part of the product and the default shell and theme for new projects you build with LightSwitch in Visual Studio 11 at final release.

So I attempted to apply the Cosmo shell to my Contoso Construction application a while back and I got a couple of remarks about how my icons were pretty much sticking out like a sore thumb. I used traditional colorful icons in the application and while that looks great with the standard shell, it doesn’t fit well with the Metro-style icons in the Cosmopolitan shell. Unfortunately I didn’t have time to redraw the icons -- and let’s face it, I’m not a designer.

Enter Syncfusion Metro Studio

Luckily there’s a FREE tool out there that you can use to quickly create Metro-style icons! Last night I was reviewing a LightSwitch e-book from Jan (to be released soon I hear!) and he mentioned this free tool so I had to go check it out, Syncfusion Metro Studio. And yes it’s free! Thank you Syncfusion for supporting the community (and particularly the developers who can’t draw!)

image

This tool comes with a bunch of icon templates you can use to get started designing your own Metro-style icons and it’s super easy to use.

image

Pick a template and then you can change the sizing, shape, foreground, and background color. You can also view & copy the XAML or save it as a PNG.

image

For LightSwitch applications that use the Cosmo shell, it’s best to go with a Transparent background so that the light grey of the command bar comes through. Although Metro Studio doesn’t let you choose a transparent background in the color picker, you can simply type it in the textbox (like I show above).

Contoso Construction Updated

It took me about 20 minutes to update all the icons in the Contoso Construction application and here’s what I came up with. Not bad for not being a designer :-). You can download the updated sample here:

Download Contoso Construction - LightSwitch Advanced Sample (Visual Studio 11 Beta)

Contoso1sm

Syncfusion Metro Studio is a huge time-saver and all the icons you create are royalty-free even for commercial applications. Have fun “Metro-styling” your apps!


Jan Van der Haegen (@janvanderhaegen) posted Extensions Made Easy v1.12: Fuzzy searching on 5/8/2012:

Once you’ve grabbed EME v1.12.1 or higher, you can explore a new piece of functionality that I ported from a client project to the extension, called fuzzy searching. Although it was created months ago and published weeks ago, I never found the time to blog about this gimmick until minutes ago. Time to post a small example, in VB.NET.

What is Fuzzy searching?

Suppose there’s a patient tracker application, written in LightSwitch. When I’m seeing the doctor, he has to enter my exact family name, or part of it, to find my record among the many Patient entities. Chances are, I’m seeing the doctor because I have a [sore] throat or an allergic reaction in my mouth, and I’m not in the mood for spelling out my last name, again (I ALWAYS have to spell out my last name… well, in about 99% of cases anyway; my wife is called Kundry Annys, and she has to spell it out in 100% of all cases)… If the patient tracker application developer implemented fuzzy searching, he could make the doctor’s and his/her patients’ lives easier.

Fuzzy searching is a technique where searching a list of items or entities is not done based on exact string match, but on “partial” or “metaphonic” match. Basically, if the user enters a search term like “Hagen”, it’s convenient in some very particular scenarios to return all entities with a property that sounds like “Hagen”, including “Haeghen”, “Haegen” and “Hägen”.

Although this isn’t something you’d want to do for each search screen, some user scenarios definitely justify the use of a Fuzzy search. Take my name for example: “Jan Van der Haegen”. Although it might sound like an exotic name to the non-Dutch speaking audience, my name is literally translated as “John of the Hedge”. You can imagine that this is quite a common family name in Belgium and the Netherlands, and thus, comes in a variety of notations across different families, from “Haägen” to “Verhaeghe” to “Van der Haegen” to…

Setting up a sample application

To test this fuzzy searching implementation, create a new LightSwitch application.

Note to self: Application798? Really? Get a life!

In the application, create a new entity called Patient.

As you might expect, the patient entity will contain some common fields like FamilyName, FirstName, and a computed field called FullName, for displaying purposes, which is computed as:

Namespace LightSwitchApplication

    Public Class Patient

        Private Sub FullName_Compute(ByRef result As String)
            ' Set result to the desired field value
            result = Me.FamilyName + ", " + Me.FirstName
        End Sub
    End Class

End Namespace
Making the entities fuzzy

To speed up the fuzzy searching, we won’t loop over the entities during the actual search. Instead, we’ll add an extra property to our Patient entity, which will never be displayed except for this demo, where we store some “fuzzy” version of our entity. When the end-user hits search, we’ll make the search term fuzzy as well, and look for exact matches between the fuzzy version of the search term, and the fuzzy version of our entity.

Add an extra property to the Patient entity called “FuzzyName”. Make sure the maximum length is high enough to contain a fuzzy version (512 characters will do since our FirstName and FamilyName properties are 255 characters each).

This would make a valid candidate to be a computed field, but since computed fields aren’t stored (they are computed on the tier wherever they are called), we’ll “manually” keep this field in sync with the other properties on save (both insert and update), by writing some code (from the Write Code dropdown).

The code we’ll need to add is this:

Imports ExtensionsMadeEasy.Utilities.Extensions.StringExtensions

Namespace LightSwitchApplication
    Public Class ApplicationDataService

        Private Sub Patients_Updating(entity As Patient)
            entity.FuzzyName = entity.FullName.MakeFuzzy()
        End Sub

        Private Sub Patients_Inserting(entity As Patient)
            entity.FuzzyName = entity.FullName.MakeFuzzy()
        End Sub
    End Class

End Namespace

The Imports statement at the top (a using directive in C#) makes the .MakeFuzzy() extension method available on any string.

I added a new screen (Lists and Details Screen template) to show the Patient entities, and in the list I’m showing both the FullName computed property and the FuzzyName property.

Again, this is done for demo purposes only; you’d normally never display this field to the end-user.

What we have done so far results in the behavior shown in the screenshot below: for each entity, a value is stored that contains the FullName, but without vowels, diacritics, lower-case letters, or non-alphabetic characters, and with some special attention to how consonants are pronounced (for example: both “Haeghen” and “Haägen” will be stored as “HGN”).

In case you are wondering, the “MakeFuzzy” .Net implementation (source code) is based on this SQL implementation by Diederik Krols. It’s supposedly Dutch specific, but I found it to work for English as well. If you disagree, feel free to export a better algorithm (just export a class from your common project that implements IMetaphonicStringValueConverter), or better yet: send it to me and I’ll gladly include your locale in EME.
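In case you’re curious what such a transformation might look like under the hood, here’s a deliberately simplified sketch. This is not the Diederik Krols-based algorithm that EME actually ships (MakeFuzzySketch is an invented name for illustration); it only strips diacritics, collapses a few consonant clusters, and keeps the upper-cased consonants, which is already enough to turn “Haeghen”, “Haägen” and “Hagen” into the same “HGN”:

Imports System.Globalization
Imports System.Text

Public Module FuzzyStringSketch

    ' Simplified, illustrative "fuzzy" transformation: strip diacritics,
    ' collapse a few clusters that sound alike, then keep upper-cased consonants only.
    Public Function MakeFuzzySketch(value As String) As String
        If String.IsNullOrEmpty(value) Then Return String.Empty

        ' 1. Remove diacritics ("ä" -> "a") via Unicode decomposition.
        Dim decomposed As String = value.Normalize(NormalizationForm.FormD)
        Dim letters As New StringBuilder()
        For Each c As Char In decomposed
            If CharUnicodeInfo.GetUnicodeCategory(c) <> UnicodeCategory.NonSpacingMark Then
                letters.Append(c)
            End If
        Next

        ' 2. Upper-case and collapse some clusters that sound alike.
        Dim upper As String = letters.ToString().ToUpperInvariant()
        upper = upper.Replace("GH", "G").Replace("PH", "F").Replace("CK", "K")

        ' 3. Keep consonants only (drop vowels, digits, punctuation, whitespace).
        Dim result As New StringBuilder()
        For Each c As Char In upper
            If Char.IsLetter(c) AndAlso "AEIOUY".IndexOf(c) < 0 Then
                result.Append(c)
            End If
        Next
        Return result.ToString()
    End Function

End Module

With this sketch, MakeFuzzySketch("Van der Hagen") and MakeFuzzySketch("Vanderhaeghen") both come out as “VNDRHGN”, which is exactly the property that makes the exact-match trick described above work.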

However, this doesn’t solve anything yet. If I misspell my name as “hagen”, the search result list is still empty…

Making the search term fuzzy

The last step is to also grab the search term that is used in the list (or grid) on the screen, and replace it with its fuzzy version before it hits the server, so it can be compared to our fuzzy entities…

The bad news is that to do this, you must subscribe to the NotifyPropertyChanged event on the screen’s IVisualCollection, find the SearchTerms property on the IScreenCollectionPropertyLoader, swap out the SearchTerms for their fuzzy counterparts, and be careful about threading in the process. Thanks to Justin Anderson for helping me get access to the “Search Pipeline”.

The good news is that you can implement this from any screen, in the Screen_Created method (from the “Write Code” dropdown), as a one-liner:

Namespace LightSwitchApplication

    Public Class PatientsListDetail

        Private Sub PatientsListDetail_Created()
            ' Swap the screen's search terms for their fuzzy counterparts before they hit the server.
            ExtensionsMadeEasy.Presentation.Screens.ScreenCollectionFuzzySearch.MakeFuzzy(
                Me.Details.Properties.Patients)
        End Sub
    End Class

End Namespace

The result is that whenever the end-user (our dear doctor) hears my name, he/she can enter “Haagen”, “Haägen”, “Haaaaaaeeeeeeeeeghen”, …; our implementation will replace it with “HGN” and match a whole set of records that sound like “Hagen” (and thus also have “HGN” in their FuzzyName property).

Success! The name-spelling era is finally over!

The above is more commonly called soundex searching.

Jan also posted Extensions Made Easy: v1.12: Deep linking entities on 5/8/2012:

I’ve been somewhat quiet on my blog lately (I have good excuses, as always), but tonight I’ll try to make up for it with a little blogging spree… Part one: what’s new in EME 1.12.

Hooray, EME 1.12 is finally released… It’s been a while since I worked on EME because I’ve been so busy. Actually, EME 1.12 was released two weeks ago, but I didn’t have the time at that point to show off the goodies included! A miniature “what’s included”…

Goody nr1: LS 11 (beta) support.

EME has been tested with VS LS 11 and works: commands, shell & theme exporting, the whole bunch. I updated the manifest so you can actually install EME to target the VS LS 11 beta! Hooray!

Goody nr2: deep linking on entities.

For those of you that missed it, EME has allowed deep linking to screens since a much earlier version. I posted this in a sample (http://code.msdn.microsoft.com/silverlight/NavigationDemo-having-some-f2629c9c), but never gave it much attention.

Deep linking is a technique in Silverlight where a user can interact with your application through the URL. Basically, by passing parameters (such as the screen name) in the URL, the LightSwitch application opens up and navigates to the correct screen when fully loaded, using the navigation commands listed below (a small helper sketch follows the list).

LightSwitch doesn’t support deep linking out of the box, but if you install & activate Extensions Made Easy, it does! That’s right, the only setup you need to do is to download Extensions Made Easy and activate it.

Possible navigation commands:

* screens: http://localhost:30325/default.htm?NavigateTo=StudentsListDetail

* entities: http://localhost:30325/default.htm?NavigateToEntity=Students&Id=1

* entities in a non-default data source (or if you renamed “ApplicationData”): http://localhost:30325/default.htm?NavigateToEntity=Students&Id=1&DataSourceName=ApplicationData
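For completeness, here’s a tiny, hypothetical helper that composes such an entity link. The BuildEntityDeepLink name is invented for this sketch; EME itself only cares about the query-string parameters shown above:

Public Module DeepLinkHelper

    ' Hypothetical helper: builds an EME entity deep link such as
    ' http://localhost:30325/default.htm?NavigateToEntity=Students&Id=1
    Public Function BuildEntityDeepLink(baseUrl As String,
                                        entityName As String,
                                        id As Integer,
                                        Optional dataSourceName As String = Nothing) As String
        Dim link As String = String.Format("{0}?NavigateToEntity={1}&Id={2}",
                                           baseUrl, Uri.EscapeDataString(entityName), id)
        ' DataSourceName is only needed for a non-default (or renamed) data source.
        If Not String.IsNullOrEmpty(dataSourceName) Then
            link &= "&DataSourceName=" & Uri.EscapeDataString(dataSourceName)
        End If
        Return link
    End Function

End Module

For example, BuildEntityDeepLink("http://localhost:30325/default.htm", "Students", 1) returns the second URL above.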

The funny thing about this “deep linking to entities” gimmick is that I was porting it to EME from a client-specific project at the very same time Chad77 was asking for this feature in the LightSwitch MSDN forums. Funny, because that almost never happens. Heck, that never happens! Anyways, Chad77 was happy with the result, and I got a free beta tester! Double tap!

Goody nr3: Fuzzy searching

The third gimmick in EME is another piece of functionality that I needed in a client-specific LightSwitch project, and happily ported to EME for your convenience: fuzzy searching. This one, though, deserves a separate blog post, just because of its coolness… :-)


Pawel Kadluczka (@moozzyk) reported EF5 Sample Provider Published in a 5/8/2012 post to the ADO.NET Team blog:

We have published a new version of the sample provider that supports features introduced in Entity Framework 5. It can be downloaded from the MSDN Code Samples Gallery.

This updated sample takes the sample provider code we released for Entity Framework 4 and demonstrates how to add support for spatial types and how to move to the new version of schema views to be able to reverse-engineer table-valued functions (stored procedures with multiple resultsets did not require any changes to the provider). This version does not contain a sample provider for Migrations or Code First, which currently use a separate provider model.

The tests (which are now using xUnit) not only test the sample provider but also show how to build queries leveraging features introduced in Entity Framework 5.

The code also contains a Data Designer Extensibility (DDEX) provider sample that now works with Visual Studio 11.

Note: To build and use the sample, Visual Studio 11 and .NET Framework 4.5 are needed. The xUnit test runner is required to run the tests inside Visual Studio. See the project page for more details.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Bill Zack (@WilliamHZack) posted Building Cloud-portable and Cloud-burstable .net Applications with Windows Azure and Amazon Web Services to the Slalom Consulting blog on 5/9/2012:

Definitions

Cloud-burstable applications are those built using an application deployment model in which an application runs in a private cloud or corporate data center and bursts into a public cloud (or clouds) when the demand for computing or storage capacity spikes.

Cloud-portable applications are applications that can take advantage of multiple clouds in order to prevent lock-in and/or make the applications more resilient in the face of cloud outages.

Motivation

It should be pretty clear why we want to build applications that are cloud-burstable. It would be a great advantage to be able to overflow our resource requirements into the cloud (or clouds), since the company then only has to pay for extra compute and storage resources when they are needed.

Cloud-portable applications, on the other hand, make you less vulnerable to cloud outages such as those both Amazon Web Services and Windows Azure users have experienced recently.

Another benefit of cloud-portability is to remove the fear of cloud vendor lock-in. It is always nice to feel that you can take your business elsewhere even if you never do.

Architecture

Just because we have moved to the cloud does not mean that we should automatically abandon all the good architectural design techniques and design patterns that we have been using successfully in developing on-premise applications. Designing an application that segregates functionality into layers (such as presentation, business logic, and data access) can go a long way toward making an application more portable.

If we examine the typical business application we will probably find that the bulk of the application exists in the business logic layer. In the case of the data access layer, in particular, the differences in the APIs supported by a particular vendor’s offering can be hidden from higher level layers of the application. (Encapsulating an area of an application that is subject to change is a proven architectural technique.)

Let’s limit our discussion to on-premise applications running in your data center and the two most popular public clouds: Amazon Web Services and Windows Azure. We also limit our discussion to .NET web applications; however, in principle the same approach should be applicable to other public and private clouds.

IMO there are good ways to achieve cloud-bursting between an on-premise data center and the Azure and/or Amazon clouds. The following assumes a well-architected application that is built using a three-tier model (and yes, I know what “assume” means, but it should be more or less true for most applications).

Now let’s attack the architecture layer by layer.

Presentation Layer
The presentation layer of a .NET web app is primarily an ASP.NET application, so if the application was originally (or newly) written to run in a web-farm environment with externalized application state, then not much is required to make it portable. If it uses SQL Server in its data layer, then compatibility is very high anyway. (If not, see below.)

Windows Azure runs ASP.NET applications that are so architected. Some encapsulation might be required to support this, but it should be minimal.

Amazon Web Services does too, by virtue of the fact that it is Infrastructure as a Service (IaaS) and fully supports Windows, IIS, .NET, and .NET applications. If an application runs on premise, it can be hoisted up onto Amazon Web Services without too much difficulty (ignoring considerations caused by physical separation, such as latency).

Business Layer
If business rules are encapsulated in a separate business layer, then this layer should be more or less totally platform independent. There may be a need for some encapsulation if the business layer makes any direct API calls to other services, although as a rule it shouldn’t. So it should be the most portable layer of all.

Data Layer
Here is where the major differences between on-premise, Azure, and Amazon exist. Encapsulation can be used to add a level of abstraction between:

  • Blob storage services (AWS Simple Storage Service and Windows Azure Blob Storage)
  • NoSQL storage (AWS Simple DB/DynamoDB and Windows Azure Table Storage)
  • Relational database (AWS Relational Database Service and SQL Azure)

In the above discussion I have not included the on-premise equivalents of storage APIs, such as those provided by the Windows file system, SQL Server, etc., but the approach should be easily extendable.

Other APIs could be suitably encapsulated and made platform-independent where they exist. Techniques like dependency injection and factory patterns could then be used to select the appropriate interface modules at execution time based upon configuration or convention.
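Here is a minimal sketch of that encapsulation idea. The interface, class, and provider names are invented for illustration, and the vendor-specific SDK calls are left as stubs; the point is only that higher layers depend on the abstraction while a factory picks the concrete store from configuration:

Public Interface IBlobStore
    Sub Save(containerName As String, blobName As String, content As Byte())
    Function Load(containerName As String, blobName As String) As Byte()
End Interface

' Would wrap the Windows Azure blob storage client library.
Public Class AzureBlobStore
    Implements IBlobStore

    Public Sub Save(containerName As String, blobName As String, content As Byte()) Implements IBlobStore.Save
        Throw New NotImplementedException("Call the Windows Azure storage client here.")
    End Sub

    Public Function Load(containerName As String, blobName As String) As Byte() Implements IBlobStore.Load
        Throw New NotImplementedException("Call the Windows Azure storage client here.")
    End Function
End Class

' Would wrap the AWS SDK for .NET S3 client.
Public Class S3BlobStore
    Implements IBlobStore

    Public Sub Save(containerName As String, blobName As String, content As Byte()) Implements IBlobStore.Save
        Throw New NotImplementedException("Call the Amazon S3 client here.")
    End Sub

    Public Function Load(containerName As String, blobName As String) As Byte() Implements IBlobStore.Load
        Throw New NotImplementedException("Call the Amazon S3 client here.")
    End Function
End Class

' Factory: selects the implementation at execution time from a configured provider name.
Public Module BlobStoreFactory
    Public Function Create(providerName As String) As IBlobStore
        Select Case providerName.ToLowerInvariant()
            Case "azure"
                Return New AzureBlobStore()
            Case "s3"
                Return New S3BlobStore()
            Case Else
                Throw New ArgumentException("Unknown blob storage provider: " & providerName)
        End Select
    End Function
End Module

An on-premise implementation backed by the Windows file system would slot in the same way, so bursting or moving the application becomes a matter of changing the configured provider name rather than touching the business or presentation layers.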

Conclusion

I realize that this smacks of “SMOP” (Simple Matter of Programming), and that the devil is in the details, but it should be a workable strategy. Of course it all depends on whether we are talking about a greenfield app or one that already exists, and whether it is well-written using a three-tier model or not.


David Linthicum (@DavidLinthicum) asserted “Removing the virtualization layer provides access to the power and performance that many cloud computing consumers seek” in a deck for his Going native: The move to bare-metal cloud services article of 5/8/2012 for InfoWorld’s Cloud Computing blog:

I've been saying for some time that virtualization and cloud computing are not mandatory partners. Certainly, virtualization is a tool that makes creating and managing cloud computing services easy. However, more and more, as organizations move to cloud computing, they're asking for the omission of that virtualization layer for better performance and control. Cloud providers are now agreeing to those demands.

As reported last week, the cloud, managed hosting, and colocation service provider Internap is the latest to provide a bare-metal cloud offering. With this technology, customers get automated provisioning of dedicated managed hosting environments, meaning there is no hypervisor virtualization platform with its performance and functional trade-offs.

Internap is not the first; SoftLayer, Rackspace, Liquid Web, and New Servers (also known as BareMetalCloud.com) also provide access to the bare metal. You can count on more providers to join the fray as cloud computing users continue to demand that their managed hosting environments work like their native environments.

It's a fact that virtualization is not a requirement when creating cloud computing services, but it is helpful to those who manage the service. Indeed, Google is able to provide a multitenant cloud computing platform without virtualization; there are other examples as well.

Many people believe that the virtualization layer provides added protection from users and applications that spin out of control. However, you pay for that protection and those management capabilities with added latency and, in many instances, the inability to provision in a timely manner.

Having designed a few cloud computing platforms in my career, my rule is that the more technology you stuff into an architecture, the more issues you have to deal with. The use of virtualization in multitenant platforms is optional, as far as I'm concerned. That means you have to consider that technology for any cloud computing stack on a case-by-case basis, including the trade-offs.

As we become more dependent on cloud-based platforms, most of the more successful cloud computing providers will quickly learn that users are seeking access to platforms that appear to be native. That means removing barriers to get at those raw resources.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Kristian Nese (@KristianNese) reported the availability of his Hyper-V Master Class in a 5/8/2012 post:

I have recently developed a "Hyper-V Master Class" course.
This course is available at www.glasspaper.no (just hit this link) and it's all about Hyper-V 3.0 in Windows Server 2012. It covers all the details, including the architecture of Microsoft's enterprise hypervisor, all the known features, as well as the new improvements that will change the landscape of private cloud architecture.

Do you wanna know how to not only survive by using Hyper-V Replica, but also how to build a business on this feature? Perhaps you're interested in how to design HA solutions by using a CA File Share over the SMB3 protocol? Or maybe you're just interested in robust and reliable guidance on how to set this up in your enterprise.

So please, if you're interested in this game changer, and want to be the champ within your organization on private cloud computing, feel free to join. It will start late in June and continue over the summer, so that you can be well prepared for the release (somewhere in time).


<Return to section navigation list>

Cloud Security and Governance

No significant articles today.


<Return to section navigation list>

Cloud Computing Events

Thomas W. Shinder, MD posted on 5/8/2012 Dr. Tom Shinder Talks Private Cloud at TechEd 2012 in Orlando:

Wow! Time sure does fly. It seems like last year’s TechEd was only yesterday. Now we’re coming up to the 2012 TechEd season and we’re ramping up some great private cloud architecture content for TechEd in both Orlando and Amsterdam. If you’re interested in private cloud architecture and foundational aspects of a private cloud infrastructure, then we’ve got some great sessions planned for you!

Of course the sessions are going to be great and you’ll learn a ton, but if you come to one or more of my sessions, you could win a book! I’m bringing a box of books and will give them away to attendees. All you need to do is ask a question at the end of the presentation. See you there!

AAP304: Private Cloud Principles, Concepts, and Patterns
Speaker(s): Tom Shinder
Monday, June 11 at 3:00 PM - 4:15 PM
Architecture & Practices | Breakout Session | 300 - Advanced

So you've heard a lot about the Private Cloud, but what exactly is a Private Cloud? What are the principles, patterns, and concepts that drive a private cloud infrastructure? Attend this session to learn about business value, the perception of infinite capacity, predictability, fabric management, fault domains, and many more private cloud architectural components that enable you to realize the true benefits of the private cloud. This is a cornerstone session that you must attend to understand how the private cloud differs from a traditional datacenter and how to architect the private cloud correctly.
Read more

AAP306: Private Cloud Security Architecture: A Solution for Private Cloud Security
Speaker(s): Tom Shinder, Yuri Diogenes
Tuesday, June 12 at 1:30 PM - 2:45 PM
Architecture & Practices | Breakout Session | 300 - Advanced

Cloud computing introduces new opportunities and new challenges. One of those challenges is how security is approached in the private cloud. While the private cloud shares a lot of security issues with traditional datacenters, there are a number of key issues that set private cloud security apart from how security is done in the traditional datacenter. In this session, Dr. Tom Shinder and Yuri Diogenes discuss these issues and wrap them into a comprehensive discussion on private cloud security architecture. By taking an architectural approach to private cloud security, you will be able to understand the critical concepts, principles and patterns that drive a successful security implementation of the private cloud.
Read more

WSV320: Understanding and Deploying Hosted Private Cloud: Concepts and Implementation
Speaker(s): Joshua Adams, Tom Shinder, Yuri Diogenes
Wednesday, June 13 at 5:00 PM - 6:15 PM
Windows Server | Breakout Session | 300 – Advanced

The Hosted Private Cloud is a new deployment model that enables an exceptional level of mobility and availability for your private cloud deployments. However, to get the most out of a Hosted Private Cloud solution, you need to understand the core concepts that drive a successful Hosted Private Cloud deployment and then understand what you need to do to implement the solution. In this talk, Dr. Tom Shinder and Yuri Diogenes discuss key Hosted Private Cloud concepts and then demonstrate critical steps in implementing a Hosted Private Cloud. Demos show you how to evaluate the Hosted Private Cloud environment and how to configure and validate your Hosted Private Cloud configuration.
Read more

For more information about TechEd North America 2012, head on over to the TechEd North America 2012 Home Page.


Geva Perry (@gevaperry) announced on 5/8/2012 his Cloud Adoption Patterns - Citrix Synergy 2012 Keynote scheduled for 5/10/2012 in San Francisco:

This coming Thursday, May 10, I'll be giving one of the keynote speeches at the Citrix Synergy 2012 conference in San Francisco. My talk is in the morning and comes right after a distinguished speaker: Sameer Dholakia, GM of the Cloud Platforms Group at Citrix.

You can see the description of the two talks (and the one by Citrix CEO Mark Templeton who speaks on Wednesday) on this Featured Speakers page.

The title of my talk is "From the Bottom Up: Patterns of Cloud Adoption". Here's the abstract:

The current pattern of cloud adoption in the enterprise may surprise you. Rather than big, strategic, top-down decisions set by the CIO, cloud computing services – IaaS, PaaS, SaaS – are being adopted primarily through a pattern of bottom-up adoption. Rank-and-file developers, IT administrators and business decision-makers are embracing cloud services and using them as a way to get their jobs done and drive the outcomes expected of them. In this talk, Geva Perry will explore this phenomenon, including its causes and the implications for the enterprise, as well as for vendors.

Regular readers of my blog know I write about this topic a lot (see, recently, Cloud Computing and SaaS Models Are About Bottom-Up Adoption and this post on the CloudSleuth Blog). In this talk, besides describing the phenomenon, I move a step further to discuss in more detail the implications of this adoption pattern for two groups: enterprise customers and SaaS/cloud vendors.

After the talk there will be a link to the video and slides. If you can't make it to Moscone on Thursday you can watch it live on the web from here: http://live.citrixsynergy.com/sanfrancisco/


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Werner Vogels (@werner) announced Amazon RDS support for SQL Server 2008 R2 as well as .NET and Visual Studio support by Elastic Beanstalk in his Expanding the [Amazon] Cloud for Windows Developers post of 5/8/2012:

The software that powers today’s world of Internet services has become incredibly diverse. Today’s announcement of Amazon RDS for Microsoft SQL Server and .NET support for AWS Elastic Beanstalk marks another important step in our commitment to increase the flexibility for AWS customers to use the choice of operating system, programming language, development tools and database software that meet their application requirements.

Using the AWS Toolkit for Visual Studio, you can now deploy your .NET applications to AWS Elastic Beanstalk directly from your Visual Studio environment without changing any code. You can then offload the management and scaling of your database and application stack to Amazon RDS and AWS Elastic Beanstalk, and focus on adding value to your customers.

Amazon RDS for SQL Server

Managing databases has been a stumbling block for many of our customers, shifting their time away from developing innovative applications to the “muck” of administrative tasks such as OS and database software patching, storage management, and implementing reliable backup and disaster recovery solutions. Amazon RDS manages all these time consuming database administration tasks including patch management, striping the storage for better performance, and database and log backups for disaster recovery, enabling developers to focus more on their applications.

Since we launched Amazon RDS for MySQL in October 2009, it has become one of the most popular services on AWS, with customers such as Intuit using the service to keep up with the steep increase in traffic during the tax season. We introduced Amazon RDS for Oracle last year, and based on the demand from our Windows customers, are introducing Amazon RDS for SQL Server today. Amazon RDS currently supports SQL Server 2008 R2 and plans to add support for SQL Server 2012 later this year.

Depending on your requirements, you can choose from four different SQL Server Editions: Express, Web, Standard and Enterprise to run on Amazon RDS. If you are a new Amazon RDS customer, you can get started with Amazon RDS for SQL Server with a Free Usage Tier, which includes 750 hours per month of Amazon RDS micro instances with SQL Server Express Edition, 20GB of database storage and 10 million I/O requests per month.

After the Free Usage Tier, you can run Amazon RDS for SQL Server under two different licensing models - "License Included" and Microsoft License Mobility. Under the License Included service model, you do not need to purchase SQL Server software licenses. “License Included” pricing starts at $0.035/hour and is inclusive of SQL Server software, hardware, and Amazon RDS management capabilities. The Microsoft License Mobility program allows customers who already own SQL Server licenses to run SQL Server deployments on Amazon RDS. This benefit is available to Microsoft Volume Licensing customers with SQL Server licenses covered by active Microsoft Software Assurance contracts. The Microsoft License Mobility program is suited for customers who prefer to use existing SQL Server licenses or purchase new licenses directly from Microsoft.
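Because an Amazon RDS SQL Server instance exposes a standard SQL Server endpoint regardless of the edition or licensing model chosen, existing .NET data-access code typically needs little more than a connection-string change. A minimal sketch follows; the endpoint, database name, and credentials below are placeholders, not real values:

Imports System.Data.SqlClient

Module RdsSqlServerSketch
    Sub Main()
        ' Placeholder endpoint in the typical RDS form: <instance>.<identifier>.<region>.rds.amazonaws.com
        Dim connectionString As String =
            "Server=myinstance.abcd1234.us-east-1.rds.amazonaws.com,1433;" &
            "Database=MyAppDb;User Id=masterUser;Password=REPLACE_ME;"

        Using connection As New SqlConnection(connectionString)
            connection.Open()
            Using command As New SqlCommand("SELECT @@VERSION", connection)
                ' Should report a SQL Server 2008 R2 build when run against an RDS instance.
                Console.WriteLine(command.ExecuteScalar())
            End Using
        End Using
    End Sub
End Module

The AWS-specific parts (provisioning, backups, the endpoint name itself) live entirely outside the application code.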

.NET support for Elastic Beanstalk

In our effort to let a thousand platforms bloom on AWS, I am excited to introduce .NET support for AWS Elastic Beanstalk. Elastic Beanstalk gives developers an easy way to quickly build and manage their Java, PHP and as of today, their .NET applications in the AWS cloud. As discussed here, Elastic Beanstalk targets both application developers by providing a simple set of tools to get started with development quickly and the platform developers by giving control over the underlying technology stack. Developers simply upload their application and Elastic Beanstalk automatically creates the AWS resources and application stack needed to run the application, freeing developers from worrying about server capacity, load balancing, scaling their application, and version control.

Using the AWS Toolkit for Visual Studio, developers can now deploy their .NET applications directly to Elastic Beanstalk, without leaving their development environment. The incremental deployment capabilities allow for quick development and testing cycles by only uploading modified files. Within seconds, new application versions get updated on a set of Amazon EC2 instances.

To get started with Amazon RDS for SQL Server and AWS Elastic Beanstalk, visit http://aws.amazon.com/rds/sqlserver and http://aws.amazon.com/elasticbeanstalk. For a hands-on demo on how to deploy .NET applications on Elastic Beanstalk with Amazon RDS for SQL Server, visit the AWS Elastic Beanstalk Developer Guide.

This portends more serious competition between Windows Azure and AWS in 2012 and beyond.


Adron Hall (@adron) posted I Can Talk About It Finally! => Tier 3 Web Fabric Platform as a Service (PaaS) on 5/8/2012 (also see below post):

A couple of months ago I shifted gears and started working for Tier 3 on a number of projects. I made this decision for a few reasons:

1. I’m a huge advocate of PaaS (Platform as a Service) technologies. I like what PaaS enables and what it eliminates. Matter of fact I’d say I’m a bull on the technology. I like to learn about, create and build the architectures within platforms. I also love the rather complex back end problems that come up when building a truly powerful, scalable, high end, highly available PaaS. You say, “Adron, Tier 3 doesn’t have any PaaS stuff, it’s an IaaS Provider, this doesn’t explain anything?” Aha! Read on (unless of course you’ve caught the news today… then you already know the answer)

2. I’m a polyglot dev. .NET kind of burned me out a few years back, so I dedicated myself to learning as many other frameworks, languages, and tech stacks as I could. I’ve never been happier with the variety these days. I’ll admit, though, I still love to use all those years of experience I have with .NET. Indeed, I have a little soft spot in my heart for C#. Tier 3, along with the Iron Foundry Project, has given me the opportunity to work across languages and stacks including Node.js, Ruby, Objective-C and more.

3. I like to build things, advocate for those things and what they can do for you, for dev teams, and in the end what we developers can build with them. Sometimes this might mean I do it myself, sometimes it means coordinating and leading a team (or as I often say of leads, “serving” the team). Right now I’m getting to do a little bit of both and it is indeed fun and really exciting! This brings me to the answer.

The Answer: Tier 3 now has one of the, if not the, most advanced PaaS environments available today. Yeah, you can quote me on that. I’m not saying it because I work at Tier 3, I’m saying it because I decided to come work at Tier 3 to help build it. Those of you that know me, know why and where I do things. I have intent behind these decisions. ;)

The Tier 3 PaaS environment officially has more support for frameworks than any other PaaS Provider out there today. Congratulations to the team for getting this out the door! Needless to say, I’m proud to be a part of this team of bad ass devs! Cheers!

What is the Tier 3 Web Fabric?

Here’s a short tour I put together…

What exactly makes up a Web Fabric? We’ve taken Cloud Foundry as a core, added Iron Foundry for full support of all major enterprise frameworks, and added a fabric over these services to provide automated, seamless creation of a complete PaaS environment.

How would you use a PaaS like this?

In an enterprise software and application development shop there is often a split between development, testing, maybe UAT (User Acceptance Testing), and finally production. One way to utilize these capabilities is to build a Web Fabric for each of these environments. Once each environment is built, it can then be scaled up or down as needed. Once you’re done with an environment, simply delete it. For an environment like UAT or Test, this is one of the most ideal situations in which to create an environment from scratch, ensuring that outliers don’t affect the testing criteria. How do you build a Tier 3 Web Fabric PaaS? This is the fun part. The process involves a little information and a few clicks, which then builds an entire PaaS environment.

Step 1: In the Tier 3 Control Panel click on the tab titled “Fabrics“. Inside that view, click on “Create Web Fabric“.

Tier 3 Control Panel

Step 2: Fill out the information requested on the screen. The user that you’re creating will be your Tier 3 Web Fabric administrator. The name becomes part of the URI you access the PaaS API from, and the friendly name below it displays as a description in the control panel. The last piece of information is public or private; the private option limits access to VPN users of your Tier 3 account only.

Creating a New Web Fabric

Step 3: Now give it some time. Remember, this is not merely a simple virtualized instance of an operating system. What is happening now is that a Cloud Foundry environment is being built, Iron Foundry is being added, and other enhancements are being applied. This creates an entire Tier 3 Web Fabric that can be used with any of the following tools, languages, and databases.

A few of the languages and frameworks…

  • Ruby on Rails or Sinatra
  • ASP.NET with whichever .NET language you prefer; it could be C#, VB.NET, or .NET COBOL if you felt so inclined to build a web application with it.
  • Java w/ Spring and other options.
  • Node.js Nuff’ Said
  • Python

Of course the database services too…

  • MongoDB
  • MS SQL Server
  • VMware PostgreSQL
  • Redis

These are just a few of the services that are or will be supported in the coming days. The Cloud Foundry base provides a massively powerful core to build off of and to extend with services and frameworks.

For pushing applications to the Tier 3 Web Fabric, here are some tools to help with that…

vmc-IronFoundry :: This is the same thing as the vmc CLI that is part of the Cloud Foundry Project except that it adds support for .NET pushes from the command line too.

vmc :: This is the default CLI used by most people working with Cloud Foundry-based PaaS environments.

Eclipse & STS for Java :: This is the extension that integrates into Eclipse.

Cloud Foundry Explorer :: This can be used to view and push .NET applications to the Tier 3 Web Fabric (or any Iron Foundry-enabled Cloud Foundry environment).

Open Source Software, Iron Foundry and More…

In the coming days, weeks, and months I’ll be working with the team here at Tier 3 to drive more capabilities and features. In addition, I’ll also be driving the Iron Foundry open source effort, pushing to extend what we’ve already provided with the .NET support extension on Cloud Foundry, and more. We here at Tier 3 love the open source community, and we love being part of the community. So with this announcement I wanted to add a big, huge, awesome THANKS to everyone out there passionately involved in and building software that is open source. You all ROCK!

Stay tuned, this is merely the beginning.


Barb Darrow (@gigabarb) reported Cloud startup Tier 3 gets serious about enterprise PaaS in a 5/8/2012 post to GigaOm’s Structure Blog:

Up-and-coming cloud provider Tier 3 is getting into the platform-as-a-service space with a new offering based upon VMware’s open source Cloud Foundry project, as well as a plethora of cloud database options.

The new PaaS, called Web Fabric, is built on Tier 3’s previously announced Iron Foundry implementation of Cloud Foundry that adds support for the Microsoft .NET framework. VMware itself, which is locked in a death match with Microsoft, is not disposed to support .NET, although Cloud Foundry natively supports multiple languages and frameworks, including Java, Ruby, PHP and Python.

Tier 3 also unveiled a suite of database services, called Data Fabric, that will give customers the option of running MongoDB, Redis, SQL Server, MySQL or Postgres databases that can connect to Web Fabric, Tier 3’s flagship infrastructure-as-a-service servers, or even external applications. Those services should be live June 1, the company said.

A company spokeswoman said Tier 3’s enterprise customers will be able to access and manage their IaaS, PaaS, and database services from a single console, an attractive proposition.

This news should be of interest to companies that want to experiment with PaaS and database services in an enterprise-class cloud, which is how Tier 3 positions its services. It also shows that the world isn’t standing still waiting for other IaaS players to get their acts together.

“Rackspace is barely getting their OpenStack IaaS up and running and here’s an IaaS running vCloud for more than a year and now offering PaaS,” said GigaOM Pro analyst Jo Maitland.

The 451 Group’s Tier1 researchers are also impressed. While acknowledging it’s still way early in the game for PaaS adoption, they wrote:

From IaaS to PaaS, Tier 3 is able to deliver a consistent enterprise cloud strategy with high availability, security and interoperability in mind. Whether it can transition enterprise IT managers, which are likely involved in early testing and development-type ad hoc cloud projects, to becoming PaaS customers in large numbers remains to be seen, but its offer of enterprise production support, platform automation and integration seems to match the demands of business organizations as they come to the end of the technology lifecycle.

Tier 3 has been busily expanding its business. In February, it announced a federated cloud platform that lets service providers deploy white-label clouds based on Tier 3’s software and leverage the geographic footprint of other Tier 3 partners. That means, for example, that a service provider in Indiana could offer customers access to resources anywhere in the world where another Tier 3-based service provider is operating.


Full disclosure: I’m a registered GigaOm analyst.


Julie Bort (@Julie188) asserted Amazon Is Trying To Steal Microsoft's Hardcore Customers Away From Microsoft's Cloud in a 5/8/2012 third-party article for Business Insider (via Yahoo! Finance):

Amazon is trying to lure Microsoft customers away from Microsoft's cloud, Azure.

Today Amazon announced a new service that will let companies use Microsoft's database and its Web programming platform, ASP.NET, on Amazon's cloud.

Customers who use those technologies are a natural fit for Microsoft's cloud.

Many people think that Amazon and Azure are competing in different parts of the cloud market. Amazon made its name with Infrastructure as a Service. You can fire up Windows servers on its cloud but you have to manage them.

With Azure, you don't do that. You just bring the applications themselves. But you are limited to applications that work on Microsoft's technology (plus a few popular open source software development tools). That's known as Platform as a Service (PaaS).

However Amazon has Elastic Beanstalk, a PaaS that competes directly with Azure.

AND there's been talk that Microsoft is going to do more IaaS stuff and even (gasp!) make Linux available on Azure. That would put it head-to-head with Amazon.

So the two are duking it out. Microsoft has stolen some great ideas from Amazon. Shortly after Amazon announced its search service, Microsoft offered a different kind of search service for Azure based on Bing. Amazon and Microsoft have been having a price war lately, too.


<Return to section navigation list>
