Tuesday, January 18, 2011

Windows Azure and Cloud Computing Posts for 1/18/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles yet today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Rob Tiffany (@robtiffany) announced SQL Server Compact 4.0 Lands on the Web on 1/17/2011:

A decade has passed since I first started using SQL CE on my Compaq iPAQ.  What started as a great upgrade to Pocket Access turned into the ultimate embedded database for Windows CE, the Pocket PC, Windows Mobile and Windows Phones.  The one-two punch of Outlook Mobile synchronizing email with Exchange and SQL Server Compact synchronizing data with SQL Server helped set the mobile enterprise on fire.  In 2005, version 3.0 supported Windows Tablets and progressive enhancements to the code base led to full Windows support on both x86 and x64 platforms.  With the new version 4.0, the little-database-that-could has grown up into a powerful server database ready to take on the web.

We’ve come a long way and you’re probably wondering what qualifies this new embedded database to take on the Internet:

  • Native support for x64 Windows Servers
  • Virtual memory usage has been optimized to ensure the database can support up to 256 open connections – (Are you actually using 256 pooled connections with your “Big” database today?)
  • Supports databases up to 4 GB in size – (Feel free to implement your own data sharding scheme)
  • Developed, stress-tested, and tuned to support ASP.NET web applications
  • Avoids the interprocess communications performance hit by running in-process with your web application
  • Row-level locking to boost concurrency
  • Step up to government- and military-grade security with the SHA2 algorithm to secure data with FIPS compliance
  • Enhanced data reliability via true atomicity, consistency, isolation, and durability (ACID) support
  • Transaction support to commit and roll back grouped changes
  • Full referential integrity with cascading deletes and updates
  • Supports ADO.NET Entity Framework 4 – (Do I hear WCF Data Services?)
  • Paging queries are supported via T-SQL syntax to only return the data you actually need

Wow, that’s quite a list!  SQL Server Compact 4.0 databases are easily developed using the new WebMatrix IDE or through Visual Studio 2010 SP1.  I’m loving the new ASP.NET Web Pages.  It reminds me of the good old days of building web applications with Classic ASP back in the ’90s with Visual InterDev and HomeSite.
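
The T-SQL paging support called out in the feature list above is easy to exercise from plain ADO.NET. Here is a minimal sketch (not from Rob’s post; the Store.sdf file and Products table are assumptions) showing the new OFFSET/FETCH syntax against a SQL Server Compact 4.0 database:

using System;
using System.Data.SqlServerCe;

class PagingSample
{
    static void Main()
    {
        // Assumed local database file; SQL CE 4.0 runs in-process with the application.
        var connectionString = @"Data Source=|DataDirectory|\Store.sdf";

        using (var connection = new SqlCeConnection(connectionString))
        {
            connection.Open();

            // Return only the second page of 20 products, ordered by name.
            const string sql =
                "SELECT ProductId, Name FROM Products " +
                "ORDER BY Name OFFSET 20 ROWS FETCH NEXT 20 ROWS ONLY";

            using (var command = new SqlCeCommand(sql, connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
}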

What about Mobility?

Since SQL Server Compact owes its heritage to mobile and embedded versions of Windows, you might want to know what our story is there.  The good news is that you can build and deploy v4.0 databases on Windows XP, Windows Vista, and Windows 7.  If you want to implement an occasionally-connected solution that utilizes the Sync Framework, Remote Data Access (RDA), or Merge Replication, you’ll need to stick with SQL Server Compact 3.5 SP2.  Time and resource constraints prevented the Compact team from enabling these features.

Luckily, single-user WPF/WinForms database applications running on Windows Slates, laptops and Windows Embedded Handheld devices will work just fine with the v3.5 SP2 runtime.  Get a jump start by picking up “Enterprise Data Synchronization with Microsoft SQL Server 2008 and SQL Server Compact 3.5 Mobile Merge Replication” at http://www.amazon.com/Enterprise-Synchronization-Microsoft-Compact-Replication/dp/0979891213/ref=sr_1_1?s=books&ie=UTF8&qid=1281715114&sr=1-1 to start building those MEAP solutions.

With the tidal wave of Windows Slates hitting the market, a secure, powerful mobile database that allows users to work offline and syncs with SQL Server is definitely going to be a hot item!

So run, don’t walk to the Microsoft Download site to download the Next-Gen database for the web:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=033cfb76-5382-44fb-bc7e-b3c8174832e2

If you need to support occasionally-connected mobile applications with sync capabilities on multiple Windows platforms, download SQL Server Compact 3.5 SP2:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e497988a-c93a-404c-b161-3a0b323dce24

Keep Syncing.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Phani Raju promised on 1/18/2011 to deliver an OData Browser for Windows Phone 7 by 1/22/2011:

Since the company got me a Windows Phone 7 device and also paid for my developer account, I feel obligated to put an application out there for the Marketplace.

Hence, we have the OData Browser for Windows Phone 7, which I am currently testing with a couple of brave beta testers. I will be uploading the app to the Marketplace at the end of this week, i.e. the 22nd of January 2011.

Here are some screens :

1. The screen to select a collection given an OData service URI; here you see Netflix’s collections.

Service_CollectionsList

2. The collection view to show you the first page of results.
Based on the make-up of your entity types, we figure out which data template to use in this view.

Since the “Titles” entity set has a Media stream backing it and has EPM annotations to atom:title, we’re able to pick up this nice view, which is not hard-coded for any specific OData service.

CollectionWithMLEView
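
As a rough idea of what the plumbing behind such a view might look like, here is a hedged sketch (my own illustration, not code from the actual app; the service URI, the Title class, and the display callback are assumptions) of pulling the first page of a feed with the WP7 OData client library:

using System;
using System.Collections.Generic;
using System.Data.Services.Client;
using System.Linq;
using System.Windows;

// Hypothetical client-side type; a real app would generate or infer this shape.
public class Title
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Synopsis { get; set; }
}

public class TitlesLoader
{
    private readonly DataServiceContext context =
        new DataServiceContext(new Uri("http://odata.netflix.com/Catalog/"));

    // 'display' stands in for whatever updates the page's bound list control.
    public void LoadFirstPage(Action<List<Title>> display)
    {
        // Ask for the first 20 titles; the callback fires on a worker thread.
        context.BeginExecute<Title>(
            new Uri("Titles?$top=20", UriKind.Relative),
            asyncResult =>
            {
                List<Title> titles = context.EndExecute<Title>(asyncResult).ToList();

                // Marshal back to the UI thread before touching bound controls.
                Deployment.Current.Dispatcher.BeginInvoke(() => display(titles));
            },
            null);
    }
}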

2a. Here we see the same view, but showing a collection that has no EPM annotations.

Since the app can’t figure out which field makes sense, it allows the user to pick which field they want to see in the view.

CustomizableViewForEntities

CustomizableViewForEntities_2

The two above captures have a decidedly Northwind look to me.

3. Selecting a row in view 2 or 2a gives you this screen, which shows the columns of the entity instance in a flat format.


You can see that this view shows the Id, Synopsis and other properties of an instance from the “Titles” set.

ColumnViewForSelectedEntity

If we find that the <atom:entry> element has a Media stream of an image type behind it, we show the media content in the Media Pivot item.

MediaView
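
For the curious, resolving such a media stream on the client is a one-liner with the OData client library. A hypothetical helper (my own sketch; it assumes the entity was materialized by the supplied DataServiceContext and that the target is an Image control on the page):

using System;
using System.Data.Services.Client;
using System.Windows.Controls;
using System.Windows.Media.Imaging;

public static class MediaStreamHelper
{
    // 'context' must be the DataServiceContext that materialized 'entity';
    // GetReadStreamUri then returns the URI of that entry's media resource.
    public static void ShowMediaStream(DataServiceContext context, object entity, Image target)
    {
        Uri streamUri = context.GetReadStreamUri(entity);
        target.Source = new BitmapImage(streamUri);   // Silverlight loads the image from the URI
    }
}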

If you have EPM mappings to GeoRss elements or have certain fields that denote the location of a place on the map, we show you a pushpin on the map as shown below.

MapViewIfDataContainsLocation

If you can’t wait for the app to be on the Marketplace and are willing to undergo some pain as I iron out the issues in the app, send me an email at PHANIRAJ AT MICROSOFT DOT COM, and I can send you the XAP so you can self-host the application on your developer-unlocked phone.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

No significant articles yet today.

 


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles yet today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Jeff Nuckolls explained how to quickly Migrate existing applications and databases to Azure (step-by-step) in this 1/18/2011 post with video segments:

Here is a quick/simple 5-step migration of an existing ASP.NET web application and SQL Server database to Windows Azure “Web Roles” and SQL Azure.

It’s intended to be a very simple demonstration to convey the concept and tools used.  It does NOT include all features of Azure or various enterprise scenarios such as large database partitioning/scaling, service bus integration, authentication, Worker Roles for background processing, or the multiple storage options available in Azure.

Step 1.  Set up your environment, download and install the required tools, and create your Azure subscriptions.

Step 2.  Create a simple ASP.NET web application and SQL Server database using Visual Studio 2010.

Step 3.  SQL Server migration to SQL Azure.

Step 4.  ASP.NET (on-premise) to SQL Azure hybrid integration (a sample connection-string change is sketched after these steps).

Step 5.  Migrate the ASP.NET application to Windows Azure.
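
For step 4, the core of the hybrid change is usually just repointing the existing ADO.NET code from the local SQL Server to SQL Azure. A minimal sketch (server, database, and credentials are placeholders, not values from the walkthrough):

using System.Data.SqlClient;

class SqlAzureConnectionSample
{
    static void Main()
    {
        // On-premise: "Server=.\SQLEXPRESS;Database=Northwind;Integrated Security=True"
        // SQL Azure requires SQL authentication, outbound TCP 1433, and encryption:
        var azureConnectionString =
            "Server=tcp:yourserver.database.windows.net,1433;" +
            "Database=Northwind;" +
            "User ID=youruser@yourserver;" +
            "Password=yourpassword;" +
            "Encrypt=True;TrustServerCertificate=False;";

        using (var connection = new SqlConnection(azureConnectionString))
        {
            connection.Open();
            // ...run the same commands the on-premise code already uses
        }
    }
}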

Hope this was helpful… future screencasts will delve into further detail in various areas.


David Chou (pictured below) posted Run Java with GlassFish in Windows Azure on 1/18/2011:

At PDC10 (Microsoft’s Professional Developers Conference 2010), Microsoft again provided affirmation of its support for Java in Windows Azure. “We're making Java a first-class citizen with Windows Azure, the choice of the underlying framework, the choice of the development tool,” said Bob Muglia (President of Server and Tools at Microsoft) during his keynote presentation (transcript). Then during PDC Vijay Rajagopalan delivered a session (Open in the Cloud: Windows Azure and Java) which provided more details on the state of many deliverables, including:

During his presentation, Vijay also talked about a successful deployment of Fujitsu’s Interstage application server (a Java EE 6 app server based on GlassFish) in Windows Azure, plus a whole slew of base platform improvements announced via the Windows Azure team blog, which helped to address many of the limitations we observed last year, such as not being able to use NIO as described in my earlier work with Jetty and Windows Azure.

Lots of great news, and I was finally able to sit down and try some things hands-on, with the latest release of the Windows Azure SDK (version 1.3; November 2010) that included many of the announced improvements.

Java NIO now works!

First off, one major limitation identified previously was that the networking sandbox model in Windows Azure (in place for security reasons) blocked the loopback adapter that NIO needed. At PDC this was discussed, and the fact that the Fujitsu Interstage app server worked (which uses GlassFish, which uses NIO) proved this now works. And fortunately, there isn’t anything additional we need to do to “enable” NIO; it just works now. I tried my simple Jetty Azure project by changing it back to using the org.eclipse.jetty.server.nio.SelectChannelConnector, deployed into Windows Azure, and it ran!

Also worth noting was that the startup time in Windows Azure was significantly improved. My Jetty Azure project took just a few minutes to become accessible on the Internet (it had taken more than 20 minutes at one point in time).

Mario Kosmiskas also posted an excellent article which showed Jetty and NIO working in Windows Azure (and many great tips which I leveraged for the work below).

Deploying GlassFish

Since Fujitsu Interstage (based on GlassFish) already works in Azure, GlassFish itself should work as well. So I thought I’d give it a try and learn from the exercise. First I tried to build on the Jetty work, but started running into issues with needing Visual Studio and the Azure tools to copy/move/package large amounts of files and nested folders when everything is placed inside of the project structure – GlassFish itself has 5000+ files and 1100+ folders (the resulting super-long file path names for a few files caused the issue). This became a good reason to try out the approach of loading application assets into role instances from Blob storage service, instead of packaging everything together inside of the same Visual Studio project (as is the best practice for ASP.NET Web Role projects).

This technique was inspired by Steve Marx’s article last year (role instance using Blob service) and realized using the work from Mario Kosmiskas’ article (PowerShell magic). With it, I was able to have an instance of GlassFish Server Open Source Edition 3.1 (build 37; latest at the time of this writing) deployed and running in Windows Azure in a matter of minutes. Below is a screenshot of the GlassFish Administration Console running on a cloudapp.net subdomain (in Azure).

admin-console

To do this, basically just follow the detailed steps in Mario Kosmiskas’ article. I will highlight the differences here, plus a few bits that weren’t mentioned in the article. Easiest way is to just reuse his Visual Studio project (MinimalJavaWorkerRole.zip). I started from scratch so that I could use GlassFish for various names.

1. Create a new Cloud project, and add a Worker Role (I named mine GlassFishService). The project will open with a base project structure.

2. Copy and paste in the files (from Mario’s project, under the ‘JettyWorkerRole’ folder, and pay attention to his comments on Visual Studio’s default UTF-8 encoding):

  • lib\ICSharpCode.SharpZipLib.dll
  • Launch.ps1
  • Run.cmd

Paste them into the Worker Role. For me, the resulting view in Solution Explorer is below:

image

3. Open ServiceDefinition.csdef and add the Startup and Endpoints information. The resulting file should look something like this:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="GlassFishAzure" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="GlassFishService">
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
    <Startup>
      <Task commandLine="Run.cmd" executionContext="limited" taskType="background" />
    </Startup>
    <Endpoints>
      <InputEndpoint name="Http_Listener_1" protocol="tcp" port="8080" localPort="8080" />
      <InputEndpoint name="Http_Listener_2" protocol="tcp" port="8181" localPort="8181" />
      <InputEndpoint name="Http_Listener_3" protocol="tcp" port="4848" localPort="4848" />
      <InputEndpoint name="JMX_Connector_Port" protocol="tcp" port="8686" localPort="8686" />
      <InputEndpoint name="Remote_Debug_Port" protocol="tcp" port="9009" localPort="9009" />
    </Endpoints>
  </WorkerRole>
</ServiceDefinition>

As you can see, the Startup element instructs Windows Azure to execute Run.cmd as a startup task, and the InputEndpoint elements specify the ports that the GlassFish server needs to listen on for external connections.

4. Open Launch.ps1 and make a few edits. I kept the existing functions unchanged. The parts that changed are shown below:

$connection_string = 'DefaultEndpointsProtocol=http;AccountName=dachou1;AccountKey=<my access key>'
# JRE
$jre = 'jre-1.6.0_23.zip'
download_from_storage 'java' $jre $connection_string (Get-Location).Path
unzip ((Get-Location).Path + "\" + $jre) (Get-Location).Path

# GlassFish
$glassfish = 'glassfish-3.1-b37.zip'
download_from_storage 'apps' $glassfish $connection_string (Get-Location).Path
unzip ((Get-Location).Path + "\" + $glassfish) (Get-Location).Path

# Launch GlassFish
.\jre\bin\java `-jar .\glassfish3\glassfish\modules\admin-cli.jar start-domain --verbose

Essentially, update the script with:

  • Account name and access key from your Windows Azure Storage Blob service (where you upload the JRE and GlassFish zip files into)
  • Actual file names you’re using
  • The appropriate command that references eventual file locations to launch the application (or call another script)

The script above extracted both zip files into the same location (at the time of this writing, in Windows Azure, it is the local storage at E:\approot). So your commands need to reflect the appropriate file and directory structure based on how you put together the zip files.

5. Upload the zip files into Windows Azure Storage Blob service. I followed the same conventions of placing them into ‘apps’ and ‘java’ containers. For GlassFish, I used the glassfish-3.1-b37.zip downloaded directly from glassfish.java.net. For Java Runtime (JRE) I zipped the ‘jre’ directory under the SDK I installed. To do the upload, many existing tools can help you do this from a user’s perspective. I used Neudesic’s Azure Storage Explorer. As a result, Server Explorer in Visual Studio showed this view in Windows Azure Storage (and the containers would show the uploaded files):

image
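
If you would rather script the upload than use a GUI tool, a minimal sketch with the SDK 1.3 StorageClient library looks something like this (an illustrative alternative, not part of the original steps; the account credentials, container names, and local paths are placeholders):

using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class UploadAssets
{
    static void Main()
    {
        // Placeholder credentials; substitute your own storage account name and key.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=http;AccountName=youraccount;AccountKey=yourkey");
        var blobClient = account.CreateCloudBlobClient();

        Upload(blobClient, "java", @"C:\assets\jre-1.6.0_23.zip");
        Upload(blobClient, "apps", @"C:\assets\glassfish-3.1-b37.zip");
    }

    static void Upload(CloudBlobClient client, string containerName, string filePath)
    {
        var container = client.GetContainerReference(containerName);
        container.CreateIfNotExist();                      // no-op if the container already exists

        var blob = container.GetBlobReference(Path.GetFileName(filePath));
        blob.UploadFile(filePath);                         // streams the zip into Blob storage
    }
}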

That is it! Now we can upload this project into Windows Azure and have it deployed. Just publish it and let Visual Studio and Windows Azure do the rest:

image

Once completed, you could go to port 8080 on the Website URL to access GlassFish, which should show something like this:

image

If you’d like to use the default port 80 for HTTP, just go back to ServiceDefinition.csdef and update this one line to:

<InputEndpoint name="Http_Listener_1" protocol="tcp" port="80" localPort="8080" />

GlassFish itself will still listen on port 8080, but externally the Windows Azure cloud environment will receive user requests on port 80, and route them to the VM GlassFish is running in via port 8080.

Granted, this is a very simplified scenario, and it only demonstrates that the GlassFish server can run in Windows Azure. There is still a lot of work needed to enable the functionality Java applications expect from a server, such as systems administration and management components, integration points with other systems, logging, relational database and resource pooling, inter-node communication, etc.

In addition, some environmental differences such as Windows Azure being a stateless environment, non-persistent local file system, etc. also need to be mitigated. These differences make Windows Azure a little different from existing Windows Server environments, thus there are different things we can do with Windows Azure instead of using it simply as an outsourced hosting environment. My earlier post based on the JavaOne presentation goes into a bit more detail around how highly scalable applications can be architected differently in Windows Azure.

Java deployment options for Windows Azure

With Windows Azure SDK 1.3, we now have a few approaches we can pursue to deploy Java applications in Windows Azure. A high-level overview based on my interpretations:

Worker Role using Visual Studio deployment package – This is essentially the approach outlined in my earlier work with Jetty and Windows Azure. With this approach, all of the files and assets (JRE, Jetty distributable, applications, etc.) are included in the Visual Studio project, then uploaded into Windows Azure in a single deployment package. To kick off the Java process we can write bootstrapping code in the Worker Role’s entry point class on the C# side, launch scripts, or leverage SDK 1.3’s new Startup Tasks feature to launch scripts and/or executables.
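
As a rough illustration of that bootstrapping (a hedged sketch rather than the exact Jetty project code; the folder names and the launch arguments are illustrative), the role entry point can simply spawn the Java process and block on it:

using System;
using System.Diagnostics;
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Files packaged with the role are deployed under %RoleRoot%\approot.
        var appRoot = Environment.GetEnvironmentVariable("RoleRoot") + @"\approot";

        var java = Process.Start(new ProcessStartInfo
        {
            FileName = Path.Combine(appRoot, @"jre\bin\java.exe"),
            Arguments = "-jar start.jar",                   // illustrative Jetty launch command
            WorkingDirectory = Path.Combine(appRoot, "jetty"),
            UseShellExecute = false
        });

        java.WaitForExit();   // keep the role instance alive for as long as Java runs
    }
}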

Worker Role using Windows Azure Storage Blob service – This is the approach outlined in Mario Kosmiskas’ article. With this approach, all of the files and assets (JRE, Jetty distributable, applications, etc.) are uploaded and managed in the Windows Azure Storage Blob service separately, independent of the Visual Studio project that defines the roles and services configurations. To kick off the Java process we could again leverage SDK 1.3’s new Startup Tasks feature to launch scripts and/or executables. Technically we can invoke the script from the role entry class in C# too, which is what the initial version of the Tomcat Accelerator does, but Java teams may prefer an existing hook to call the script.

Windows Azure VM Role – The new VM Role feature could be leveraged, so that we can build a Windows Server-based image with all of the application files and assets installed and configured, and upload that image into Windows Azure for deployment. But note that while this may be perceived as the approach that most closely aligns to how Java applications are deployed today (by directly working with the server OS and file system), in Windows Azure this means trading off the benefits of automated OS management and other cloud fabric features.

And perhaps, with the new Remote Desktop feature in Windows Azure, we can probably manually install and configure Java application assets. But doing so sort of treats Windows Azure as a simple hosting facility (which it isn’t) and defeats the purpose of all the fault-tolerance, automated provisioning and management capabilities in Windows Azure. In addition, for larger deployments (many VM’s) it would become increasingly tedious and error-prone if each VM needs to be set up manually.

In my opinion, the Worker Role using Windows Azure Storage Blob service approach is ideally suited for Java applications, for a couple of reasons:

  • All of the development and application testing work can still be accomplished in the tools you’re already using. Visual Studio and dealing with Windows Azure SDK and tools are only needed from a deployment perspective – deploying the script that launches the Java process
  • Managing individual zip files (or other archive formats) and at any granularity level, instead of needing to put everything into the same deployment package when using Visual Studio for everything
  • Loading application assets from the Windows Azure Storage Blob service also provides more flexibility for versioning, reuse (of components), and managing smaller units of changes. We can have a whole set of scripts that load different combinations of assets depending on version and/or intended functionality

Perhaps at some point we could just upload scripts into the Blob service and simply tell the Windows Azure management portal which scripts to run, as Startup Tasks, for which roles and instances from the assets we store in the Blob service. But there are still things we could do with Visual Studio – setting up IntelliTrace, facilitating certificates for Remote Desktop, etc. However, treating the Blob service as a distributed storage system, automating the transfer of specific components and assets from the storage to each role instance, and launching the Java process to run the application could be a very interesting way for Java teams to run applications in the Windows Azure platform.

Lastly, this approach is not limited to Java applications. In fact, any software component that can be loaded from Blob storage and then installed in an automated, script-driven manner (we may even be able to use another layer of script interpreters such as Cygwin) can use this approach for deployment in Windows Azure. By managing libraries, files, and assets in Blob storage, and maintaining a set of scripts, a team can operate and manage multiple (and multiple versions of) applications in Windows Azure, with comparatively higher flexibility and agility than managing individual virtual machine images (especially for smaller units of changes).


Kenneth van Sarksum reported Release: Microsoft MAP 5.5 on 1/18/2011:

After releasing a public beta in November last year, Microsoft has now released the Microsoft Assessment and Planning (MAP) Toolkit version 5.5. MAP provides agentless discovery, inventory and assessment for a variety of scenarios, now supporting assessment for migration to the Windows Azure and SQL Azure platform.

The Windows Azure assessment inventories web applications and SQL Server database instances in the environment and reports the information you need to plan the migration of these on-premises workloads to the Windows Azure Services Platform and Microsoft SQL Azure Database.

This version provides the following new features:

  • Assessment for migration to Windows Azure and SQL Azure [Emphasis added.]
  • MySQL, Oracle and Sybase database discovery for SQL Server migration projects
  • Server consolidation assessment for Hyper-V
  • Internet Explorer 8 and Windows 7 upgrade assessment


Alex Williams listed Factors That Make Microsoft's Cloud-Based CRM Service Worth Considering in a 1/18/2011 post to the ReadWriteCloud blog:

A few factors make Microsoft Dynamics CRM Online a player in the social CRM space that deserves serious notice.

Microsoft Dynamics CRM Online is available in 40 languages. Regional data center development provides a universal experience for the user, be they in the United States, Asia or the Middle East. And it provides a price difference that makes Oracle look exorbitant and Salesforce.com quite pricey.

Microsoft Dynamics CRM Online fully integrates with Microsoft Office. That provides a level of content management to integrate with social components such as an activity stream.

These combined factors put the service square in the middle of the market for CRM offerings.

Here's a bit more about what's available with the new Microsoft Dynamics CRM Online:

  • The service went through a four month beta test with 11,500 customers around the world.
  • The company built out data centers in North America, the Middle East and the Asia Pacific regions for the launch.
  • Microsoft Dynamics CRM Online is available at a monthly rate of $34 per user. That compares to Oracle and Salesforce.com, which charge $150 and $125 per user per month, respectively.
  • Microsoft is aggressively promoting Dynamics. It will provide $200 per user, which can be applied to services such as migrating data or customizing the solution to meet unique business needs.
  • Microsoft Dynamics CRM Online has a Microsoft Office interface, providing an experience that people are accustomed to using.
  • For developers, there is configuration and customization with full .NET development in Windows Azure, providing integration with business applications. [Emphasis added.]

Paul Greenberg is bullish about Windows Azure. But the authoritative CRM expert has a few overall reservations about Microsoft Dynamics CRM:

Most germane here, Dynamics CRM 2011, announced for 40 markets by year end in July 2010, is a solid CRM competitor in the market when it comes to traditional SFA and customer service functionality. However, it suffers when it comes to its integration of social channels, despite its proclamation of social "connectors," which are used to integrate Facebook, LinkedIn and other external social feeds. Frankly, by comparison to its competitors, not all that much to show. In fact, competitive feed integrators like Gist do a better job than what I've seen, and provide more value. But what can't be argued is its familiarity through the use of an Outlook user interface, or the solidity of its traditional functionality (minus of course marketing, which is as poor as many of their competitors, though companies like Oracle - see above - are starting to incorporate marketing into the suite quasi-effectively, at least).

It's sometimes easy to think that Salesforce.com has a lock on the social CRM space. This latest offering shows that Microsoft's might and productivity offerings should not be overlooked even though in some respects it has obvious shortcomings.

Here’s what Paul Greenberg has to say about Windows Azure and Dynamics CRM in his Finally! The CRM 2011 Watchlist Part I post to ZDNet’s Social CRM: The Conversation blog of 12/28/2010:

Microsoft

This one is perhaps the most enigmatic choice and the most surprising that I’ve made. Not that Microsoft doesn’t belong on the Watchlist for 2011. They most decidedly do. But because they faded in a few areas, they actually didn’t make my threshold but still are powerful enough to belong on this list unequivocally, which shows, of course, that there is a lot of subjectivity to this list. However, not reaching the threshold points out that they have some serious holes to fill in 2011, but if they do, watch out, because they have some strengths that are clearly there but fuzzily defined by their own efforts.

So what are those strengths? What do they have to do to show that I did the right thing - though that’s, needless to say, not exactly a reason to do anything about what I’m going to point out. Their strengths lie not just in their CRM product offering Dynamics CRM 2011 - which is good except in the area, of course, of marketing. Their strength lies in Microsoft Azure, it lies in their move to the cloud as their primary way of delivering their applications over the next few years; it lies in their, gasp, shockeroo, wow, ulp, whoa, mobile operating system, Windows Phone 7 - a mobile OS that they finally seem to have gotten right. And, if done right, it lies with their Dynamics product line, led by CRM, which they finally seem to have understood internally. All of this combined can lead them to a good place in 2011 and an improved position in the marketplace that they’ve struggled with for the last two or three years, including when it comes to it, CRM. [Emphasis added.]

For example, Microsoft Azure is a truly competitive cloud infrastructure that has all the components needed to become a competitor to Amazon, unlike the rest of the industry who are more pretenders to the cloud provider throne. To punctuate the point, eBay became a customer of Azure in July 2010.  Azure is, as far as I’m concerned, Microsoft’s ace in the hole, when it comes to regaining market position in 2011 and beyond. [Emphasis added.]

But they don’t stop there. Aside from getting Windows 7 right (for a change), they got Windows Phone 7 mostly right too. Windows Phone 7 is not just a huge technical upgrade over all prior Windows mobile operating systems but its user interface reflects the communications paradigm shift that supports the current social consumer’s behaviors.  It might be the best organized interface of all the mobile systems (sorry iPhone lovers, Android folks - I use both of them but Windows Phone 7’s appeal is undeniable). It lays a foundation for the mobile CRM apps that are critical to the future of Social CRM.

Most germane here, Dynamics CRM 2011, announced for 40 markets by year end in July 2010,  is a solid CRM competitor in the market when it comes to traditional SFA and customer service functionality.  However, it suffers when it comes to its integration of social channels, despite its proclamation of  social “connectors,” which are used to integrate Facebook, LinkedIn and other external social feeds. Frankly, by comparison to its competitors, not all that much to show. In fact, competitive feed integrators like Gist do a better job than what I’ve seen, and provide more value. But what can’t be argued is its familiarity through the use of an Outlook user interface, or the solidity of its traditional functionality (minus of course marketing, which is as poor as many of their competitors, though companies like Oracle - see above - are starting to incorporate marketing into the suite quasi-effectively, at least).

Despite this competence, the Dynamics platform seems to generate, all in all, an ambivalent buzz in the marketplace. On the one hand I heard rumors that Microsoft was trying to sell off Dynamics to Accenture, which are untrue as far as I can tell. On the other hand, in reality, it’s become a core part of Microsoft’s revenue - and is being seen as increasingly so, as the always excellent Josh Greenbaum points out.  So Microsoft becomes an enigma wrapped in a riddle wrapped in a puzzle, or whatever order that’s supposed to be in.

But they should be concerned that they fell below my threshold, unlike their other competitors.  They aren’t doing anything horribly. But neither are they standing out in a market that demands razor sharp clarity due to the sheer number of competitors of all sizes in it and because Microsoft like everyone else is now competing for customer’s attention, not just with competitors. They have done a good job in making Dynamics CRM 2011 more visible than any other CRM release in their history. They are engaging analysts and influencers where they need to - as is evidenced by their November 2010 industry analysts day in Seattle. They do not slack on their analyst engagement. Nor do they fall down with a partner ecosystem that is truly the best partner ecosystem in perhaps any industry.

But that doesn’t excuse their holes. Here are a few of them. Even though they engage analysts in the way they should, they don’t provide a lot of thought leadership in CRM/SCRM - little that’s noticed in any case.  Their attempts at Social CRM are halfhearted to say the least, with not only the Gists of the world outdoing the Microsoft social connectors, but companies like CDC Software Pivotal, built strictly on a Microsoft dot Net platform, eating their lunch when it comes to solid Social CRM functionality. As far as their ability to innovate goes, while it’s never one person who is the sole innovator at a company, they lost Ray Ozzie - a blow that isn’t going to be easy to recover from.

Their overall messaging I think is clear in two places - they are all in when it comes to the cloud - Yay, cloud - and they are starting to nail the consumer entertainment space with their Kinect push (a product that sold over 1 million in its first month).  Their CRM messaging needs to fit a bit better in this.

I have an idea.

Several years ago, I heard Bill Gates say that Microsoft desired to be the technology foundation for how you ran your life (a theme seems to be emerging from the Big 4 all in all). That meant the business and personal parts of your life. Microsoft needs to expand that unified theme. Sometimes you are SO big, that the core message can get lost in its different parts. Getting back to this “Gatesian” message with a laser focus, incorporating Dynamics CRM 2011 - a solid product - into it, and getting out there into the market with this message for both marketshare (right now CRM has 23,000 customers, 1.4 million users) and mind share, and the beast can be unleashed - in a good way.

Which is why they belong on the Watchlist. Really.


Wade Wegner (@WadeWegner) described Programmatically Changing the AppPool Identity in a Windows Azure Web Role in a 1/17/2011 post:

With the Windows Azure SDK 1.3 comes full support for IIS in web roles, giving your web roles the ability to access the full range of web service features available in IIS.  This provides a lot of powerful capabilities, some of which are outlined here.

By default, the AppPool runs as the NetworkService.  Go ahead and create a new Windows Azure Project, choose a web role, and hit F5 to run it in IIS.  A new AppPool is created with a GUID for the name, and you’ll see that the Identity is NetworkService.

DefaultAppPoolIdentity

In most cases this is fine, and the NetworkService identity is sufficient.  However, there are some cases where you may need to change the AppPool identity.  Here at Microsoft, for example, we have an internal proxy that manages all our network traffic.  In order for traffic to move through the proxy, the underlying identity has to be an authenticated domain user.  Consequently, any requests sent by the NetworkService result in errors when trying to access network resources (e.g. Windows Azure storage, AppFabric service namespaces, and SQL Azure).

Since we cannot proactively change the settings used to create the AppPool, we need to make the change immediately following its creation but before our application starts to run.  Consequently, we’ll use the OnStart() method in the WebRole’s role entry point (i.e. WebRole.cs).

First, you’ll need to add a reference to two assemblies:

  1. System.DirectoryServices
  2. Microsoft.Web.Administration (found in C:\Windows\System32\inetsrv)

Now, update the OnStart() method with the following code:

Code Snippet

  1. public override bool OnStart()
  2. {
  3. // For information on handling configuration changes
  4. // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
  5. var webApplicationProjectName = "Web";
  6. var appPoolUser = "northamerica\\wwegner";
  7. var appPoolPass = "password";
  8. var metabasePath = "IIS://localhost/W3SVC/AppPools";
  9. using (ServerManager serverManager = new ServerManager())
  10.     {
  11. var appPoolName = serverManager.Sites[RoleEnvironment.CurrentRoleInstance.Id + "_" + webApplicationProjectName].Applications.First().ApplicationPoolName;
  12. using (DirectoryEntry appPools = new DirectoryEntry(metabasePath))
  13.         {
  14. using (DirectoryEntry devFabricAppPool = appPools.Children.Find(appPoolName, "IIsApplicationPool"))
  15.             {
  16. if (devFabricAppPool != null)
  17.                 {
  18.                     devFabricAppPool.InvokeSet("AppPoolIdentityType", new Object[] { 3 });
  19.                     devFabricAppPool.InvokeSet("WAMUserName", new Object[] { appPoolUser });
  20.                     devFabricAppPool.InvokeSet("WAMUserPass", new Object[] { appPoolPass });
  21.                     devFabricAppPool.Invoke("SetInfo", null);
  22.                     devFabricAppPool.CommitChanges();
  23.                 }
  24.             }
  25.         }
  26.     }
  27. return base.OnStart();
  28. }

A few things to point out:

  • Line #5: Specify the name of your web role project as webApplicationProjectName.  This is used on line #11 to look up the newly created AppPool.


  • Lines #6 & #7: Update to use your own username & password.
  • Line #18: The AppPoolIdentityType of 3 means we’re providing a custom username & password.

Once you update your code and run again, you’ll see that your AppPool is updated with the new identity.

NewAppPoolIdentity

Now, whenever your application makes a request through the network, it will originate from your domain account (or whatever you specify) instead of the NetworkService.

WARNING: I should point out that there is one negative side effect that I’ve noticed when changing the identity of the AppPool programmatically – it will prevent the debugger from working normally.  I don’t (yet) know of a way to resolve this, but I’m still looking.


<Return to section navigation list> 

Visual Studio LightSwitch

Dan Moyer posted Why I Believe Visual Studio LightSwitch will be a Win on 1/9/2011 (missed when published):

In my career, I’ve worked with several Microsoft products which failed.   Sometimes you have a sense early on that a product will be a Fail.   Other times the product’s future initially looks promising but due to development delays and changing market conditions the product becomes a Fail.

I’ve been checking out Visual Studio LightSwitch the past couple of months and believe it has a bright future.

This article discusses an earlier Microsoft product which LightSwitch takes a lot of ideas from and why I think Microsoft is getting it right this time.

Awareness

When I first read about LightSwitch, it caught my attention because of my experience in developing business applications and working with business application frameworks. Its goal of giving the user rapid application development, its ability to connect to multiple data sources, and its support for multiple deployment environments differentiate it from other tools I’ve worked with in the past.

What really piqued my interest is seeing familiar faces and names on the Channel 9 videos and in the LightSwitch Developer’s forum. Some of these folks have been in the business application development ecosystem for a couple decades. They bring a wide and deep experience of knowing what works, and more importantly, knowing what fails.

First Steps

I downloaded and explored the example applications.

I listened to the How Do I videos and read through the Q & A on the LightSwitch Developer Center Forum.

I read some great blog postings by Michael Washington showing the ease of developing a Student Information System application using LightSwitch.

Forming an Opinion

I haven’t seen a development tool quite like LightSwitch.

Developers often speak of code smells in discussing code that doesn’t look right, which hint at deeper problems in the design of the code.

Well, I also think there are product smells. Product smells are what your intuition tells you about new product development and the sense of whether the product will become a Win or Fail.

Microsoft has had its share of developments and early release products with a ‘product smell’.

Let me give you some examples from my personal experience with Microsoft projects which were Fails. With the context established of products which failed, I’ll discuss why I think LightSwitch will be a Win.

Fifteen years ago I worked in a development group responsible for creating server management software. The company sent me to attend a vendor-only conference in Redmond where Microsoft showed early bits of “OLE Management Framework”. (Hey, this was before Microsoft decided to rename Object Linking and Embedding to COM.) That framework never saw the light of day. Parts of it may have morphed to become part of WMI, but I saw no more of the OLE Management Framework. In my opinion, the market was not ready for that framework. The developer community had not embraced ‘OLE interfaces’ and Windows servers were not widely deployed in the enterprise. Companies had little incentive to develop management applications on top of such a framework.

I’m sure we can all name some Microsoft projects which “had the smell”: Bob, Clippy, and most recently Microsoft Kin.

Microsoft has had a number of product and framework developments which died a slow death. There are a number of examples of projects which initially appeared to have a bright future, but after months of development, the company stopped work on the project and the bits and IP morphed into other products and product teams.

Some examples which come to mind are WinFS, Oslo and Project Quadrant, and one which, for me, hit close to home: MBF (Microsoft Business Framework).

LightSwitch appears different than these Failed projects.  I believe it has the potential to be a wildly successful product. Let me explain more why I believe this.

I used to work on the Microsoft Business Framework team.

In 2001, due to the investments which Microsoft was making at the time in the Business division and the MBF project, MBF appeared to have a great future. Microsoft spent millions of dollars to acquire several companies, Great Plains, Navision, and Solomon, with the intention of consolidating their business applications into one framework and one application.

Let me state my case using some clips from a news article  made in 2002:

“Microsoft is in for the long haul as far as its business applications are concerned with a 10-year development plan and a new Microsoft Business Framework to support the basket of disparate applications offered under the Microsoft Business Solutions banner. ….

The expanding technology stack is part of Microsoft Business Solutions’ goal to provide end-to-end software support for the SME business sector via an integrated application platform, and attempts to address the issue of how the division will manage and integrate its disparate mix of applications which include those from Great Plains and Navision plus its own home-grown offerings. With this proto-framework Microsoft is aiming to change the perception of what constitutes the base for an application platform, said Edwards.

If there were any doubts over how big an effort Microsoft planned to make in the applications mid-market they are rapidly being dispelled. The division has been identified as one of the seven pillars of the wider Microsoft business and while it only produces revenue of $0.5bn at the moment, a paltry sum compared to that of some of Microsoft’s other divisions, the goal is to be generating revenue of $10bn in 10 years time. Edwards says the strategy is a 10-year gamble aimed at getting mid-market companies connected…..”

I remember the mantra $10 billion in 10 years very well. With so much investment made in creating the new division and creating product teams, who was I to argue otherwise in 2002?

A few years later it was a different environment:

From a news article in 2005:

“… Microsoft Business Framework (MBF) is no more.

The new strategy is to make the various technologies that were to comprise MBS available as part of a variety of other currently shipping and soon-to-be-delivered Microsoft products.

On Tuesday, Microsoft announced internally that it had reassigned the couple hundred MBF team members to other product teams, primarily the Visual Studio and Dynamics units, inside the company. Microsoft officials made the company’s decision public on Wednesday.

MBF was to be a set of developer tools and software classes designed to ride atop the Microsoft .Net framework. MBF was developed primarily by Great Plains Software team, which Microsoft acquired in 2001. Microsoft was working to build a number of its products — including the Microsoft Business Portal, the next version of its Visual Studio .Net tool suite and its “Project Green” wave of ERP/CRM products — all on top of the MBF layer.

When Microsoft decided in August, 2004 to remove the Windows File System (WinFS) functionality from Windows Longhorn/Vista and Longhorn Server, MBF was one of the casualties. At that point, Microsoft refused to pin a delivery-date target on either WinFS or MBF.

As of April this year, however, Microsoft officials revealed that their plan was still to deliver MBF as a standalone set of classes and libraries. Microsoft had delivered one MBF test build to about 40 customers and software developers who were experimenting with the bits. Microsoft’s goal was to deliver the final MBF framework toward the end of 2007, Darren Laybourn, general manager of MBF, told eWEEK.”

Why It’s Different This Time

I don’t think LightSwitch has a future of dimming out like MBF.

Firstly, there’s a market need for a tool like LightSwitch. This market need will drive adoption as more developers and business analysts become aware of how cost effectively they can create business applications.

The business application ecosystem needs a tool which lets non-programmer Business Analysts construct an application without requiring a deep knowledge of n-tier development, T-SQL commands, and application plumbing construction.

But that tool needs to provide extensibility points to allow programmers to hook into it for custom models and to add out-of-box functionality. For example, a business application generator needs to give a programmer hooks to use other .NET frameworks like Workflow Foundation to add functionality that’s not in the delivered product.

If Microsoft has learned anything since 2002 about the business application space, it’s that it continues to be a fragmented market and the needs of different businesses make a single solution hard to implement. After a decade, the company still has three business products—Dynamics SL, Dynamics GP, and Axapta. Ease of use coupled with extensibility points is key to the success of a product to build business applications.

Unlike MBF, LightSwitch has a growing and vibrant user community in its Beta 1 release. MBF bits were not released to the developer community, there were no online tutorials and public facing technical evangelists.

LightSwitch is providing many of the concepts which MBF evangelized back in 2002, which is why I think the LightSwitch team “gets it”.

Let me point out a few slides from a PDC 2003 presentation.

Developing Business Applications Using the Microsoft Business Framework Overview DAT340.ppt

clip_image001

clip_image002

clip_image003

clip_image004

clip_image005

Hmm… Data Entities, Entity Validation, Business Logic Abstraction, Persistent Object Data Abstraction (as in data source connections of external databases, Sharepoint, or WCF RIA Services?)

With little editing, these eight year old slides could be used today in a LightSwitch presentation.

The legacy MBF experience sure appears to be reemerging into LightSwitch.

This may well be in part because some prominent members of the LightSwitch team worked on MBF and Dynamics GP.

I mentioned earlier in this posting about the experience of the people behind LightSwitch and why it piqued my interest when I first learned about it.

Steve Anonsen and Dan Seefelt are members of the LightSwitch team. Both were key members of the Dynamics Great Plains product development and later, MBF. It certainly appears they are bringing their decades of experience of what works and what doesn’t work in business application development to LightSwitch.

Steve and Dan are very active in the LightSwitch forum, quickly answering early adopters’ questions.

In the video Inside LightSwitch, you can watch Steve give an overview of LightSwitch architecture.

Summary

So those are a few of my opinions, and the basis for forming them, on why I believe LightSwitch has the potential to ‘knock it out of the ball park’ for business application product development. This is why I’m investing some of my free time to kick the tires on the early bits and become an early adopter.

I’m curious to hear your comments.

Do you think LightSwitch has the potential to be successful like Visual FoxPro or will early adopters be the last ones out to turn off the lights?

This is the first time I’ve heard the words “Microsoft Business Framework” in years. Total vapor!


<Return to section navigation list> 

Windows Azure Infrastructure

Buck Woody listed Windows Azure and SQL Azure Use Cases in a 1/18/2011 post:

The key to effectively leveraging “Cloud Computing” or more accurately, Distributed Computing architectures like Windows and SQL Azure is to implement them where they make the most sense. This is actually good advice for any computing paradigm, but some folks believe that a particular tool should be used in all circumstances. Microsoft does not recommend that you take all of the computing resources you have on-premise and move them to a distributed architecture.

A Use Case is defined as “when or where you use a technology” and a Pattern/Practice is defined as “how you implement a technology”.  In this series of posts, I’ll cover the use cases, and I’ll also give you resources to leverage to implement them.

Windows Azure Use-Cases

Elastic Scale: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-use-case-elastic-scale.aspx [See below.]

SQL Azure Use-Cases

Apparently, Buck’s SQL Azure Use-Cases post is “under construction.”


Buck Woody described Windows Azure Use Case: Elastic Scale on 1/18/2011:

This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description:

“Elastic Scale” is a description of computing where the demands for the compute resources expand and contract. Normally this involves increased user activity for the system, but the compute functions might also be driven by activity within the code itself. Let’s take a quick look at each.

In a centralized computing model (such as client/server), there is a finite set of resources for Compute and Storage, defined by the limits of the server architecture in place. The compute and storage resources are constant, assuming that they are at their maximum levels. In effect, you’re paying for and maintaining a high level of computing whether you use it or not.

image

In a distributed computing model (such as “Cloud Computing” and Windows Azure) computing and storage resources can be added - and removed - dynamically. This means that as demand increases, you can add more resources to the pool based on a trigger of your choosing, and establish the ceiling for those resources if you want one. Conversely, as the load decreases, the resources also decrease. In this design, the use drives the capacity you pay for.

image

Implementation:

The key to implementing a distributed application is to keep it stateless. In a program, “state” is loosely defined as keeping track of the previous operations within the program. Normally this record is kept in memory during the run of a particular program.

For instance - when you begin a bank transaction, the ATM machine takes your card, accepts your PIN, accepts the money you deposit and the withdrawals you make, and then returns your card, closing out the transaction. It maintains the “state” of your entire process, bookmarked on each end by taking your card and giving it back to you.

image

In stateless computing, the assumption is that the components within the process will come and go at various times. A good analogy here is e-mail. An e-mail system is made up of multiple parts:

  • Mail Client Software
  • Message
  • Transport
  • Server
  • Processing
  • Transport
  • Recipient(s)

When you compose an e-mail message, none of the other components are aware of your activities. When you send it, your client program is no longer aware of what is happening to the message. When the server receives it, your client software might even be powered off. The “state” is maintained by each component.

To adequately scale your workload, each component should be largely unaware of other components. That means you need to design your code such that it has more of the characteristics of an e-mail system. Keep in mind that even a bank ATM machine transaction can still be coded as a stateless application - I’m using it here as an example. In fact, very few things cannot be created in a stateless way.

You can “persist” state - that is, save it - to storage that each component can access. This is in fact the purpose of the Message Queue storage in Windows Azure. It allows one component to write a message and then leave. The next component can pick up that message and work on it. It’s similar to what you see in a restaurant. The waiter takes your order and drops it off in the kitchen, and the cook looks at each order to prepare it. In this analogy, the “message” is the meal order ticket.
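
Here’s a minimal sketch of that hand-off using the Windows Azure Queue storage API (the storage credentials and the queue name are placeholders):

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class OrderTicketSample
{
    static void Main()
    {
        // Placeholder connection string; use your own storage account.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=http;AccountName=youraccount;AccountKey=yourkey");
        var queue = account.CreateCloudQueueClient().GetQueueReference("orders");
        queue.CreateIfNotExist();

        // The "waiter" component drops off the order ticket and moves on...
        queue.AddMessage(new CloudQueueMessage("table 12: soup of the day"));

        // ...and the "cook" component - possibly another role instance - picks it up later.
        var message = queue.GetMessage();
        if (message != null)
        {
            // Process the order, then delete it so no other worker repeats the work.
            queue.DeleteMessage(message);
        }
    }
}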

Resources:


Bill Zack described Cloud Power on Ignition Showcase in a 1/18/2011 post to his Ignition Showcase blog:

For those ISVs that want to come up to speed quickly on Microsoft’s Cloud initiatives, here is a summary of the Cloud related posts that have appeared in the Ignition Showcase blog recently.  The posts are categorized by:

    • General Cloud
    • Public Cloud

    • Private Cloud

    1. General Cloud

    An ISV in the Age of the Cloud: It is important that we know who our constituency is. That has prompted us to think hard and long about what an ISV is.  I realize that many of you may think this is a naïve question since “everyone knows” the answer. But bear with me a bit because, thanks to the new cloud computing paradigm that is upon us, the definition needs extension to accommodate a new type of ISV: the SaaS (Software as a Service) vendor.

    Journey to the Cloud: No company other than Microsoft has so many cloud solutions that you can apply to solving your business needs.  What this means for Independent Software Vendors and Partners is that we have truly entered a new generation of Information Technology. No product or service can be successful without taking the cloud into consideration.

    Cloud Computing: Cloud computing is about outsourcing your IT to get internet scale, availability and manageability.

    How Secure is the Microsoft Cloud?: As an ISV implementing an application running in Microsoft’s Cloud you will be asked how secure it is compared to running the same application in your own data center.

    Drive Demand and Close Sales for your Microsoft Infrastructure and Cloud Solutions: Live Session to learn about the NEW Cloud Power Campaign, as well as the Optimized Datacenter, Business Ready Security, and Simplify IT campaigns. 

    2. Public Cloud

    Hands on Labs on Moving Applications to the Cloud: Based on the recently published Microsoft Patterns and Practices guide Developing Applications for the Cloud on the Microsoft Windows Azure™ Platform.

    High Performance Computing – On-Premise and In the Windows Azure Cloud: For HPC applications requiring extremely low latency Microsoft has for some time had a product, HPC Server 2008 that provides an on-premise architecture  Now Windows Azure can participate in HPC solutions.

    Mark Russinovich on Windows Azure: Mark discusses why you should move to the cloud and the internal architecture of Windows Azure.

    The Windows Azure Platform Appliance for Everyone? Not!: In the world of Windows Azure there is a major misconception that the Windows Azure Platform Appliance announced at Microsoft's Worldwide Partner Conference (WWPC) in July is Microsoft’s answer to having your own Private Cloud. This is not correct.

    Windows Azure for IT Pros: There is a lot of developer-focused information on the web about the Windows Azure Platform.  This blog post series is one designed for the non-developer IT professional. 

    Weekly Azure Updates from OakLeaf Systems: Covers Visual Studio LightSwitch, Windows Azure Infrastructure, Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds, Cloud Security and Governance, Cloud Computing Events, and other cloud computing topics.

    Contemplating a Move to the Cloud?: Are you an ISV contemplating moving your product or service to the cloud? Cumulux is an ISV with its own line of products and an interesting methodology for helping other ISVs and customers move their applications and services to the Windows Azure cloud.

    Dynamics CRM 2011: xRM Cloud Acceleration Lab Videos: While hosting the xRM Cloud Acceleration Lab (week of 6th Dec ’10; Redmond, WA), I got an opportunity to chat with several ISVs building their cloud offerings using Dynamics CRM 2011 Online.

    SQL Azure Raises the Cloud Database Bar: SQL Azure takes the power of SQL Server to the cloud allowing you to leverage relational data as you do on-premise SQL Server data.

    Microsoft is “All In” Cloud Expo in New York: Lest there be any doubt that Microsoft is “All In” the cloud, we recently were one of the primary sponsors of Cloud Expo at New York’s Javits Convention Center. The highlight of the expo was a complete Windows Azure compute and storage container.

    Microsoft Releases Open Source Tools for the Cloud: Microsoft makes a commitment to openness in the cloud, releasing new OSS tools for Windows Azure; new tools and SDKs expand choices for PHP developers targeting Windows Azure.

    xRM Cloud Acceleration Lab Video: Permuta Technologies: During xRM Cloud Acceleration Lab (week of 6th Dec'10, Redmond, WA), I got an opportunity to chat with David Milton, CTO, Permuta Technologies, regarding their cloud strategy using Dynamics CRM.

    Developing SharePoint Applications that Run On-Premise or in the Cloud: If you are an ISV considering moving an on-premise SharePoint application to SharePoint Online, or interested in developing applications that can run in either mode, this webcast is one that you should not miss.

    Free Windows Azure Webinars for ISVs: If you are an ISV or a partner, you know that you had better be thinking about how your product or service will continue to exist and be successful in the new cloud paradigm.

    3. Private Cloud

    Microsoft Private Cloud Announced!: At TechEd Europe we announced our Private Cloud solution: Microsoft Hyper-V Cloud. Hyper-V Cloud is a set of programs and offerings that makes it easier for businesses to build their own clouds.

    You May Have a Private Cloud Already!: Worried about having to implement lots of new software to build your private cloud? You may not have to. Enterprises that have standardized on Windows Server 2008 R2 likely already have the tools to build a private cloud.

    An Outside View of Microsoft’s Private Cloud Initiative: This article gives an objective look at Hyper-V Cloud from a non-Microsoft source.  

    Hyper-V Survival Guide: This is for the ISV who wants to know more about Hyper-V, the basis for Microsoft’s Private Cloud offering, Hyper-V Cloud.

    Webcast: Delivering VKernel Chargeback Solutions for Microsoft Hyper-V Cloud Initiatives: Windows Server 2008 R2 with Hyper-V and the System Center portfolio afford IT organizations flexibility in the dynamic delivery of IT services to end users. With the recent release of the System Center Virtual Machine Manager Self Service Portal solution, virtualized private cloud initiatives can now more effectively provide self-service provisioning and business models.

    Cloud Got you on Edge?: If you are an ISV implementing a Software as a Service application that serves up blob data to globally remote locations, you need to be aware that Microsoft has a growing list of 20 physical edge nodes that can be used to cache data locally.

    More Clouds in New York: In a previous post we told you about how New York City has agreed to consolidate 45 separate Microsoft contracts into one $20 million multi-agency deal.

    Thanks for promoting the OakLeaf blog, Bill.


    David Linthicum posted Testing for a Loosely Coupled Architecture on 1/18/2011 to ebizQ’s Where SOA Meets Cloud blog:

    Been thinking. Considering that loose coupling is a foundation of SOA, and I would say of cloud computing as well, perhaps it's a good idea to break down loose coupling into a few basic patterns: location independence, communication independence, security independence, and instance independence.

    Location independence refers to the notion that it matters not where the service exists; the other components that need to leverage the service can discover it within a directory and leverage it through the late-binding process. This comes in handy when you're leveraging services that are consistently changing physical and logical locations, especially services outside of your organization that you may not own, such as cloud-delivered resources. Your risk calculation service may exist on premise on Monday and within the cloud on Tuesday, and it should make no difference to you.

    Dynamic discovery is key to this concept, meaning that calling components can locate service information as needed, and without having to bind tightly to the service. Typically these services are private, shared, or public services as they exist within the directory.
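
    As a rough illustration of location independence and late binding, the sketch below resolves a service's endpoint from a directory at call time instead of hard-coding it. The registry contents, service name, and URLs are hypothetical assumptions; a real SOA would use a proper registry or configuration service rather than an in-memory dictionary.

```python
# Minimal sketch of location independence via a service directory (registry).
# The registry contents, service names, and URLs are illustrative assumptions.
SERVICE_DIRECTORY = {
    # logical name -> current endpoint; could point on-premise today, cloud tomorrow
    "risk-calculation": "https://risk.example.internal/api/v1",
}

def resolve(service_name: str) -> str:
    """Late binding: look up the service location at call time, not design time."""
    try:
        return SERVICE_DIRECTORY[service_name]
    except KeyError:
        raise LookupError(f"No endpoint registered for '{service_name}'")

def call_risk_service(payload: dict) -> None:
    endpoint = resolve("risk-calculation")    # caller never hard-codes the location
    print(f"POST {endpoint} with {payload}")  # real code would issue an HTTP request here

# If operations moves the service to the cloud, only the directory entry changes:
SERVICE_DIRECTORY["risk-calculation"] = "https://risk-calc.cloudapp.example.net/api/v1"
call_risk_service({"portfolio": "A-123"})
```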

    Communications independence means that all components can talk to each other no matter how they communicate at the interface or protocol levels. Thus, we leverage enabling standards, such as Web services, to mediate the protocol and interface differences.
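
    Here is a minimal sketch of that kind of mediation, assuming a producer that emits JSON and a consumer that expects XML (both message shapes are made up for illustration); in practice a Web services stack or service bus would perform this translation.

```python
import json
import xml.etree.ElementTree as ET

# Sketch of interface/protocol mediation: two components speak different wire
# formats (JSON vs. XML); a mediator translates between them so neither changes.
def json_producer() -> str:
    return json.dumps({"orderId": 42, "status": "shipped"})

def xml_consumer(payload: str) -> None:
    order = ET.fromstring(payload)
    print("Consumer received order", order.findtext("orderId"),
          "with status", order.findtext("status"))

def mediate(json_payload: str) -> str:
    """Translate the producer's JSON into the consumer's expected XML."""
    data = json.loads(json_payload)
    root = ET.Element("order")
    for key, value in data.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_consumer(mediate(json_producer()))
```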

    Security independence refers to the concept of mediating the differences between security models in and between components. This is a bit difficult to pull off, but necessary for any SOA. To enable this pattern, you have to leverage a federated security system that's able to create trust between components, no matter what security model is local to each component. This has been the primary force behind the number of federated security standards that have emerged in support of a loosely coupled model and Web services.

    Instance independence means that the architecture should support component-to-component communication using both synchronous and asynchronous models, and not require that the other component be in any particular state before receiving the request, or message. Thus, if done right, all of the services should be able to service any requesting component asynchronously, as well as retain and manage state no matter what the sequencing is.

    The need for loosely coupled architecture within your cloud computing solution is really not the question. If you leverage cloud computing correctly, other than in some rare circumstances, you should have a loosely coupled architecture. However, analysis and planning are also part of the mix...understanding your requirements and how each component of your architecture should leverage the other components of your architecture. Leverage the coupling model that works for you.


    Cade Metz reported “RightScale does Azure” in a deck for his Cloud juggler eyes Microsoft's floating VMs post of 1/13/2011 to The Register (missed when published):

    RightScale – the southern California startup whose eponymous online service lets you juggle so-called infrastructure clouds – is preparing tools for managing Microsoft's Azure cloud as well.

    Azure isn't an infrastructure cloud. It's what the world calls a platform cloud. Rather than offer on-demand access to raw compute power and storage – as, say, Amazon does – it serves up development tools and other services that let you build and host applications online without diving into those underlying infrastructure pieces. But Microsoft has long said that Azure will offer limited access to raw VMs for those who want to test Windows apps, and it's this infrastructure cloud–like piece that RightScale will help you manage.

    Last month, Microsoft introduced a beta of the service, known as Azure "VM role".

    "[Azure] is a very different animal for us. Up until recently, it wasn't even an option to support it. If you have a pure platform-as-a-service, there really isn't a role for RightScale," RightScale CEO Michael Crandell tells The Register. "But [VM role] is something that exposes an infrastructure as a service, so we're off to the races."

    RightScale began as a means of managing Amazon's AWS infrastructure cloud, but after Amazon introduced its own web interface, Crandell and company expanded the service to additional "public clouds", including services from Rackspace and GoGrid, as well as platforms that underpin "private clouds" behind the firewall, including Marten Mickos's Eucalyptus and the Cupertino-based Cloud.com. The idea is that you can you use one online service to manage applications across multiple clouds – both public and private.

    "Our users run their servers in Amazon, RackSpace, GoGrid, Eucalyptus, etc. That's the cloud they're using. RightScale is the management platform they use to manage all these cloud resources," RightScale CTO Thorsten von Eicken once explained to The Reg. "In the end, our users are in control of their servers, disk volumes, IP addresses as they get them from the infrastructure cloud provider. We just enable them, make it easier, save time, reduce risk."

    As you might expect, the use of private clouds is still, well, largely theoretical. "There aren't really production implementations – meaning people aren't running production apps that way," Crandell tells us. "There are a number of projects in the works, ranging from pilot projects to actual build-outs. But to be honest, none of them have reached the stage where they're actually running production apps." But the use of public clouds is growing. The company now has customers in 30 countries worldwide.

    Like other cloud efforts, RightScale says it's receiving huge interest in Japan. In December, it signed a deal with the Japan-based consultant Kumoya that saw the cloud-happy consultant become an authorized RightScale distributor. Just this week, Cloud.com – whose private-cloud platform is supported by RightScale – signed its own deal with Kumoya. "We're seeing significant traction in the Asian market across the board," Cloud.com marketing chief Peder Ulander tells The Reg. "Japan is one of the most aggressive markets."

    Microsoft's Azure VM role isn't exactly analogous to the VMs you get on Amazon's AWS. It won't run, say, Linux. "This will be a version of infrastructure-as-a-service that's constrained," Microsoft director of platform strategy Tim O'Brien told us last year. "You won't be able to load up any arbitrary service you want to load up. We're going to give you a constrained base Windows 2008 image, and it's constrained in a way that Azure knows how to manage it."


    <Return to section navigation list> 

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

    Charlton Barreto describes Intel Cloud Builders’ approach to private and hybrid clouds in his Cloud made easy post:

    The world is exploding with a growing number—and variety—of computing devices. By 2015, we expect to see 15 billion devices and an additional one billion users worldwide. This growing user base will continue to demand, and expect, a wide assortment of applications and on-demand services—all with a fantastic user experience.

    How will we get there? Client-aware cloud computing: Internet-based computing that provides shared resources, software applications, and information to computers and other devices on-demand—similar to your electric service.

    'Client-aware' defined

    Imagine you’re low on battery power; no problem. The client-aware application knows and has already saved your critical data. Intel Software and Services Group (SSG) pioneered the Web application programming interface (API) that adds this benefit to the cloud.

    In a client-aware environment, the cloud takes advantage of client capabilities to optimize application delivery and end-user experience across a range of devices in a secure fashion. Client-aware solutions adapt seamlessly to your devices regardless of the type of client system you are using. You’re in control. You are no longer tethered to a single computer or network. Change computers, and your applications and documents follow you through the cloud. Move to a mobile device, and your applications and documents are still available. There's no need to buy a special version of a software program for a particular device, or to save your document in a device-specific format.
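
    As a purely hypothetical sketch of the client-aware idea, the snippet below saves a document to cloud storage when the client reports low battery. Every function name in it (get_battery_percent, save_to_cloud) is an assumption for illustration and not part of any Intel or Windows Azure API.

```python
import json

# Hypothetical client-aware autosave: the capability check and the cloud-save
# call below are placeholders, not real platform APIs.
LOW_BATTERY_THRESHOLD = 15  # percent

def get_battery_percent() -> int:
    """Placeholder for a real device-capability query."""
    return 12

def save_to_cloud(document: dict) -> None:
    """Placeholder for a call to a cloud storage service."""
    print("Persisted to cloud:", json.dumps(document))

def autosave_if_needed(document: dict) -> None:
    # The application adapts to the client's reported state:
    # low battery triggers an immediate save so no work is lost.
    if get_battery_percent() <= LOW_BATTERY_THRESHOLD:
        save_to_cloud(document)

autosave_if_needed({"title": "Draft report", "body": "..."})
```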

     

    Intel Cloud Builders: The path to Intel’s ‘Cloud 2015’ vision

    With many IT organizations evolving to meet these user expectations of a wide assortment of applications and on-demand services, Intel SSG and Data Center Group (DCG) formed Intel® Cloud Builders—an industry initiative in which leading systems and software solution providers offer best practices and practical guidance on how to deploy, maintain, and optimize a cloud infrastructure. The program’s goal is to provide a path to Intel’s “Cloud 2015” vision.

    Intel Cloud Builders will deliver proven, detailed reference architecture solutions designed to address IT challenges and the requirements of the Open Data Center Alliance. These best practices can be used now to build and enhance clouds that are more simplified, secure, and efficient.


    “Engaging with the program benefits Microsoft and our mutual customers because we jointly create reference architectures so our customers can get a tried and true solution,” says Bill Laing, Microsoft corporate vice president, Server and Cloud Division. “Together, we're really about delivering more value to our customers at lower cost over time.”

    “HP is a proud member of the program,” says Steven Dietch, vice president, Marketing, Cloud Infrastructure, Hewlett-Packard. “We're collaborating very tightly, particularly from an engineering perspective where ultimately the end goal is to make it easier for customers to deploy cloud solutions whether that is in a private or public domain.”

    Simon Crosby, CTO, Datacenter and Cloud Division, Citrix Systems, Inc., said, “Our participation in the program will provide customers access to a stable, tested starting point for deploying cloud infrastructures built on state-of-the-art Intel® Xeon® hardware, and cloud solutions from Citrix. Citrix will build on these reference architectures to enable our customers to deliver scalable cloud services."

    Taking the ‘crazy’ out of cloud design, deployment, and operation

    In short, Intel Cloud Builders is the vehicle by which Intel provides proven solutions that enable leading IT organizations and system and solutions providers across the industry to make the transition to cloud computing.

    The goal with Intel Cloud Builders is to show an enterprise or service provider infrastructure IT engineer how to design and operate a cloud, and make the transition to cloud computing simpler, safer, and more cost-effective.

    In addition to addressing the challenge of getting a basic cloud up and running, our expanded scope includes usage models that show IT how to build a power-efficient cloud, a set of trusted pools, on-boarding, secure cloud access, scale out storage, unified networking, client-aware computing, and more.

    The result is a set of detailed reference architectures—designed to address the evolving usage requirements of the Open Data Center Alliance, which should make for a few less “crazy” IT people in the world.

    Software optimization makes Intel solutions shine

    For the datacenter, SSG works closely with many large-scale datacenters and Internet applications—Web apps, social media, video, etc.—on software optimization. Examples include the work done with Facebook, software optimizations that led to a six-times reduction in image processing at Alibaba—a global leader in e-commerce for small businesses—and a 50 percent increase in power density at Baidu—a Chinese search engine for websites, audio files, and images.


    <Return to section navigation list> 

    Cloud Security and Governance

    David Linthicum asserted “Legal subpoenas to cloud providers serve as reminders that you lose control and even awareness of who can access your data” in the deck for his The government is driving some people away from the cloud post of 1/18/2011 to InfoWorld’s Cloud Computing blog:

    Paul Carr from TechCrunch did a good job making the case for why some of us may want to reconsider blanket uses of the cloud: "I've been growing increasingly alarmed by stories such as the U.S. government subpoenaing Twitter (and reportedly Gmail and Facebook) users over their support of WikiLeaks. The casual use of subpoenas, including against foreign citizens is worrying enough -- the New York Times says more than 50,000 'national security letters' are sent each year -- but even more concerning is the fact that often these subpoenas are sealed, preventing the companies from notifying the users they affect."

    In other words, you're putting your personal data on a cloud provider, and the government can go directly to it for that data, bypassing you altogether. While you might think your cloud provider would stand up to such requests, most are legally bound to hand over the information.

    As the whole notion of the cloud is to turn over your calendars, emails, documents, and business data to somebody you don't control, you have to understand the accompanying risk. Thus, many organizations are moving back to client-based email, calendaring, and document storage, understanding that at least the subpoena will come to them, and not their cloud provider.

    As the argument goes, if you don't have anything to hide, then you should not be worried. I don't think anyone scared away from the cloud by the recent events is a criminal -- these are people who'd rather not have their personal or corporate information accessed without their knowledge and permission. It's all about who has ultimate control.

    The cloud will continue to be a trade-off. If you use a cloud service, you give up control for efficiency. The authorities will get what they need, when they need it, and from whoever has it.

    That said, the likelihood of your being a target of this type of legal action is pretty small if you're operating within the law and/or don't tee off somebody who may want to sue you. I would not recommend that anyone retreat on cloud computing for that reason alone. However, it does make you think.


    <Return to section navigation list> 

    Cloud Computing Events

    Rob Gillen (@argodev) will present Cloud Services: Beyond The Buzz, a Planet Technologies Event, on 1/20/2011 from 11:00 AM to 12:00 Noon PST:

    Everyone is talking about the cloud...

    But, are they making sense? Rob Gillen, a leading cloud expert with Planet Technologies, is hosting a half-hour webcast entitled Cloud Services: Beyond The Buzz, on Thursday, January 20, 2011.

    As a solutions architect with Planet for more than ten years, Rob works every day with cloud technologies. Hear his tales from the front lines with examples of work he's been involved with and case studies citing real issues addressed and solved within the cloud.

    He’ll discuss the many ways cloud technologies are impacting:

    • Moderate-Scale High Performance Computing
    • Entryway to Larger-Scale Computing
    • Data Distribution

    He'll also share his predictions for where he thinks cloud services are headed in the future. You won’t want to miss this opportunity to ask questions! If you'd like to hear from Rob Gillen before the webcast, visit his blog at http://gillenfamily.net.

    About Planet Technologies
    Planet Technologies is an international IT services and business consulting firm with expertise in the integration of a host of technologies and data center solutions for the public sector, service providers and enterprise clients.

    Join the conversation. Click to Register Now!

    Read Rob’s background article about Digital Forensics and the Cloud here.

    Rob is well known to the Azure community for his performance analyses of Windows Azure tables and blobs.


    The Windows Azure Team announced on 1/17/2011 a FREE Partner Webcast: Cloud Computing and Application Management Opportunities for IT Professionals - Tuesday, January 25, 2011, 8-9:00AM PT:


    Calling all partners: if you'd like to know more about the opportunities that the Windows Azure platform presents for IT Pros, then you won't want to miss our upcoming FREE Academy Live webcast: Cloud Computing and Application Management Opportunities for IT Professionals.

    This level 200 course will be presented by David Aiken, Worldwide TSP, Windows Azure, and will provide an in-depth look into how IT Pros can deploy and manage applications on the Windows Azure Platform.  Click here to learn more and register for the event.

    Academy Live webcasts are one-hour online events for Partners and Microsoft employees using Microsoft Office Live Meeting. Before the webcast, please ensure you have downloaded the latest version of Microsoft Office Live Meeting 2007.


    David Pallman announced on 1/18/2011 that he’ll be Speaking at San Diego .NET User Group on Windows Azure Best Practices on 1/20/2011 at 6:00 to 8:30 PM PST:

    This Thursday, January 20th, I'll be speaking at the San Diego .NET User Group - Architecture SIG meeting. My topic will be "Windows Azure in the Real World: Best Practices and Migration Tips."

    Architecture SIG Meeting - Special Date/Time
    Thursday, January 20th (6 pm - 8:30 pm)


    Windows Azure in the Real World: Best Practices and Migration Tips

    Summary:
    Cloud computing with Windows Azure is exciting but it is vital to approach it correctly. In this talk, David Pallmann will share best practices and lessons learned from real-world use of Windows Azure, including application migration tips and stories from the trenches. If you want to succeed in your adoption of Windows Azure, this practical information will help you learn from the successes (and mistakes) of others.

    Speaker: David Pallmann
    David Pallmann is a Windows Azure MVP and author of the upcoming book, The Azure Handbook. He is GM of the App Dev practice at Neudesic, a national Microsoft SI partner, where he leads cloud technical readiness, IP, and business development.

    When and Where
    We'll be meeting on the 4th floor of the Microsoft La Jolla office. Pizza will be available at 6:00 PM. The meeting will start at 6:30.

    Address:
    9255 Towne Centre Dr., San Diego, CA 92121 Map
    PLEASE NOTE THAT PARKING ARRANGEMENTS HAVE CHANGED. YOU WILL HAVE TO PAY FOR PARKING IF YOU USE THE BUILDING LOT. WE SUGGEST YOU PARK ON THE STREET TO THE NORTH OF THE BUILDING OR IN THE UTC SHOPPING CENTER.


    Microsoft Tech*Ed Middle East 2011 will take place on 3/8 to 3/10/2011 in Dubai, UAE. You can check out the track owners’ bios here. Jon Bagley heads up the Cloud Computing and Online Services (COS) track:


    Software-plus-services is the next logical step in the evolution of computing. It represents an industry shift toward software design that is neither exclusively PC- nor browser-centric and blends traditional client-server architecture with cloud-based software delivery. The Cloud Computing & Online Services track provides information about Microsoft technology and innovation in software-plus-services. Learn about enterprise-ready software services from Microsoft Online Services, such as Microsoft® Exchange Online, Microsoft® SharePoint® Online, Microsoft® Office Communications Online, and Microsoft Dynamics® CRM Online. This track also provides information about the Azure™ Services Platform, where developers can take advantage of an Internet-scale cloud services platform hosted in Microsoft data centers to build new applications in the cloud or extend existing applications.

    Session names and descriptions weren’t available when this post was published.


    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Todd Hoff reported the availability of a Paper: Relational Cloud: A Database-as-a-Service for the Cloud in a 1/18/2011 post to the High Scalability blog:

    The Relational Cloud Project is an effort by a group of researchers at MIT to investigate technologies and challenges related to Database-as-a-Service within cloud computing. They are trying to figure out how the advantages of the DaaS (Database-as-a-Service) model, which we've seen arise in other areas like OLAP and NoSQL, can be applied to relational databases. The DaaS advantages as they see them are: 1) predictable costs, proportional to the quality of service and actual workloads, 2) lower technical complexity, thanks to a unified and simplified service access interface, and 3) virtually infinite resources ready at hand.

    An interesting description of their approach appears in the paper Relational Cloud: A Database-as-a-Service for the Cloud. From the abstract:

    This paper introduces a new transactional “database-as-a-service” (DBaaS) called Relational Cloud. A DBaaS promises to move much of the operational burden of provisioning, configuration, scaling, performance tuning, backup, privacy, and access control from the database users to the service operator, offering lower overall costs to users. Early DBaaS efforts include Amazon RDS and Microsoft SQL Azure, which are promising in terms of establishing the market need for such a service, but which do not address three important challenges: efficient multi-tenancy, elastic scalability, and database privacy. We argue that these three challenges must be overcome before outsourcing database software and management becomes attractive to many users, and cost-effective for service providers.

    The key technical features of Relational Cloud include: (1) a workload-aware approach to multi-tenancy that identifies the workloads that can be co-located on a database server, achieving higher consolidation and better performance than existing approaches; (2) the use of a graph-based data partitioning algorithm to achieve near-linear elastic scale-out even for complex transactional workloads; and (3) an adjustable security scheme that enables SQL queries to run over encrypted data, including ordering operations, aggregates, and joins. An underlying theme in the design of the components of Relational Cloud is the notion of workload awareness: by monitoring query patterns and data accesses, the system obtains information useful for various optimization and security functions, reducing the configuration effort for users and operators.
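
    To give a feel for the third feature, the sketch below shows one simple idea behind running equality predicates over data the server never sees in plaintext: store a deterministic keyed token per value and compare tokens. This is a drastic simplification of the adjustable, layered scheme the paper describes (which also supports ordering, aggregates, and joins), and the key, table, and query here are assumptions made up for illustration.

```python
import hmac, hashlib

# Simplified illustration only - NOT the Relational Cloud scheme: equal
# plaintexts produce equal tokens, so the server can match equality
# predicates without seeing the plaintext values.
SECRET_KEY = b"client-side key; never shipped to the database server"

def eq_token(value: str) -> str:
    """Deterministic keyed token for equality matching."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

# "Server-side" table holds only tokens (real systems also store ciphertexts).
rows = [
    {"customer_token": eq_token("alice"), "order_id": 1},
    {"customer_token": eq_token("bob"),   "order_id": 2},
    {"customer_token": eq_token("alice"), "order_id": 3},
]

# The client rewrites WHERE customer = 'alice' into a token comparison.
wanted = eq_token("alice")
print([r["order_id"] for r in rows if r["customer_token"] == wanted])  # [1, 3]
```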

    Related Articles

    The authors presented a “Relational Cloud: a Database Service for the cloud” session in CIDR 2011’s Cloud Services section on 1/11/2011.


    Michael Coté reported OpenStack storage at Internap – OpenStack’s first production use – Brief Note in a 1/18/2011 post to his Redmonk blog:

    The open source cloud platform OpenStack has its first production use outside of the founding partners of Rackspace and NASA. Internap has launched, though still in beta, a cloud storage offering called XIPCloud that uses OpenStack Object Storage.

    The long road to adoption

    While OpenStack was launched last year to much fanfare and attention (see the interesting, public community tracking they do), parts of the open source project are still in development, so there haven't been uses of the stack in the field other than at Rackspace. While Internap's use covers just one part of OpenStack, it's a "milestone," as they say, for the highly regarded stack that hasn't seen much production use as of yet.

    By comparison, Eucalyptus, which is often seen as the market-share foe of OpenStack, recently released a press release saying "one in five of the Fortune 100 started a Eucalyptus Cloud in 2010." Eucalyptus being bundled in Ubuntu certainly doesn't hurt either.

    When launching OpenStack, Rackspace admitted that they wanted to get the project started before the product was fully finished to get more people involved in the community – a move RedMonk thought was good for open spirit. With plenty of business in hand, Rackspace is looking at the long-term play for becoming “The New Linux” as El Reg so graciously dubbed the project in a rare charitable moment of headlinery.

    Build a cloud business

    I spoke with folks from Internap and Rackspace last Friday. Internap's Scott Hrastar said they'd been working with the OpenStack crew for the past 4 months to get this offering up and running, integrating with Internap's back-end billing and customer systems, I'd guess, given that those systems are not part of the general OpenStack offering. He also said they had some help from some of the original Nova/NASA coders.

    Scott said part of the appeal of OpenStack was the ability to build differentiating services on top of the basic offering – presumably, the open source nature will allow Internap to do this more easily than with proprietary offerings. For existing hosting companies like Internap, "cloud" brings a major fear of being "dumb infrastructure" (akin to the "dumb pipes" and "stupid networks" telcos fear becoming): swappable IT services that have no way of differentiating other than (low) price. While we didn't go over what these unique services would be, presumably Internap thinks it can build additional value on top of the default Object Storage offering, helping them gain and retain customers.

    More
    • Nancy Gohring at IDG covers the announcement, including: "There has also been strong interest from financial services companies, he said. They tend to have very large operations so the deployments take time, he said. OpenStack expects to make more announcements about such large users in the coming months."
    • Cade Metz covers the story for The Register, adding: “Internap has not actually contributed to the OpenStack project, but it intends to do so. It’s also evaluating the Nova codebase, but at this point, Hrastar says, the company has no firm plans to offer a public service that serves up processing power.”
    • The official press release.

    Disclosure: Rackspace is a client, as is Eucalyptus, VMWare, IBM, Cloud.com, Microsoft.com, and many others working in this space.


    Klint Finley [pictured below] asked When Should You Use Hadoop? in a 1/16/2011 post to the ReadWriteCloud:

    RedMonk analyst Stephen O'Grady tackles the question "What Factors Justify the Use of Apache Hadoop?" O'Grady cites two of the most common criticisms of Hadoop: 1) most users don't actually need to analyze big data, and 2) MapReduce is more complex than SQL. O'Grady confirms these criticisms, but finds Hadoop useful anyway.

    O'Grady acknowledges that volume isn't the only factor in the complexity of a dataset. "Larger dataset sizes present unique computational challenges," writes O'Grady. "But the structure, workload, accessibility and even location of the data may prove equally challenging."

    RedMonk uses Hadoop to analyze both structured and unstructured datasets. There are a number of other tools the firm could use to analyze the data, so why Hadoop? O'Grady responds that datasets companies use aren't big data yet, but they are growing rapidly.

    O'Grady says that RedMonk uses Big Sheets and Hive to work with Hadoop and avoid using Java to write queries.
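
    To see why Hive (or Pig) is attractive, compare a HiveQL word count - roughly SELECT word, COUNT(*) FROM words GROUP BY word - with the mapper and reducer you would otherwise write by hand. The sketch below expresses those two phases in Python and simulates the shuffle locally; with Hadoop Streaming, similar scripts would read from stdin and write tab-separated key/value lines, but the sample data and function names here are illustrative assumptions.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: sum the counts for each word. The sorted() call stands in
    for Hadoop's shuffle/sort, which groups identical keys before reducing."""
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    sample = ["the quick brown fox", "the lazy dog", "the fox"]
    for word, count in reducer(mapper(sample)):
        print(f"{word}\t{count}")
```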

    Cloudera recently published an announcement about how the company Tynt is using Cloudera's Hadoop distribution. Tynt is a web analytics company that processes over 20 billion viewer events per month - over 20,000 events per second. Prior to adopting Hadoop, Tynt was adding multiple MySQL databases per week to deal with the data.

    Another example of a company that's using Hadoop is Twitter. We covered Twitter's use of Hadoop here. Twitter needs to use clusters for its data. The amount of data it stores every day is too great to be reliably written to a traditional hard drive. Twitter's also found that SQL isn't efficient enough to do analytics at the scale the company needs.

    Like RedMonk, Twitter avoids writing Java queries. However, it uses Pig instead of Hive.

    Twitter is working with 12 terabytes of new data per day, significantly more than RedMonk uses. Nonetheless, both companies are making good use of the technology.

    How have you used Hadoop? Have you ever found that it was too big for a project that you tackled? If so, what did you end up using instead?

    See also: Getting Started with Hadoop and Map Reduce


    <Return to section navigation list> 
