Sunday, April 25, 2010

Windows Azure and Cloud Computing Posts for 4/24/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the sections that follow.

To use the section links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in April 2010 for the January 4, 2010 commercial release.

Azure Blob, Table and Queue Services

CloudBerryMan’s Manage Azure Blob Storage with CloudBerry Explorer post of 4/25/2010 to CIO.com’s Cloud Computing News blog announces:

CloudBerry Lab has released CloudBerry Explorer v1.1, an application that allows users to manage files on Windows Azure blob storage just as they would on their local computers.

CloudBerry Explorer allows end users to accomplish simple tasks without special technical knowledge and to automate time-consuming tasks to improve productivity.

Among the new features are Development Storage and $root container support, as well as the availability of a professional version of CloudBerry Explorer for Azure.

Development Storage is a local implementation of Azure storage that is installed along with the Azure SDK. Users can test and debug their Azure applications against it locally before deploying them to Azure. The newer version of CloudBerry Explorer allows working with Development Storage the same way you work with online storage.

The $root container is a special top-level container in an Azure Blob Storage account; blobs stored in it can be addressed at the root of the account's URL. With the newer release of CloudBerry Explorer, users can work with the $root container just like any other container.
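
For readers who prefer code to a GUI, here’s a minimal sketch (mine, not CloudBerry’s) of the same two features as seen from the Azure SDK’s StorageClient library: Development Storage is reached through its well-known shortcut connection string, and the $root container is addressed like any other container.

```csharp
// A minimal sketch using the 2010-era Microsoft.WindowsAzure and
// Microsoft.WindowsAzure.StorageClient assemblies that ship with the Azure SDK.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class RootContainerDemo
{
    static void Main()
    {
        // Development Storage: the local emulator installed with the Azure SDK,
        // reachable through its shortcut connection string.
        CloudStorageAccount account =
            CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        // A real account would instead use a connection string of the form
        // "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...".

        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // "$root" is the special root container; its blobs are addressable directly
        // under the account URL. This sketch assumes the container already exists.
        CloudBlobContainer root = blobClient.GetContainerReference("$root");

        foreach (IListBlobItem item in root.ListBlobs())
        {
            Console.WriteLine(item.Uri);
        }
    }
}
```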

With release 1.1, CloudBerry Lab also introduces the PRO version of CloudBerry Explorer for Azure Blob Storage. This version will have all the features of CloudBerry S3 Explorer PRO but is designed exclusively for Windows Azure. CloudBerry Explorer PRO for Azure will be priced at $39.99, but for the first month CloudBerry Lab is offering an introductory price of $29.99.

In addition, as part of its strategy to support leading cloud storage providers, CloudBerry Lab is working on a version of CloudBerry Backup designed to work with Azure Blob Storage. This product is expected to be available in May 2010.

CloudBerry Explorer is designed to work on Windows 2000/XP/Vista and Windows 7. A Microsoft PowerShell command-line interface allows advanced users to integrate storage access with other routines.

CloudBerry Explorer for Windows Azure is freeware. CloudBerry Explorer PRO costs $39.99 (US). Users must continue paying their Windows Azure charges directly to Microsoft.

For more information & to download your copy, visit our Web site at:

CloudBerry Explorer Freeware for Windows Azure http://www.cloudberrylab.com/default.aspx?page=explorer-azure

CloudBerry Explorer PRO for Windows Azure http://www.cloudberrylab.com/default.aspx?page=explorer-azure-pro

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

G. Garza’s Microsoft SQL Azure – A Database for the Cloud post of 4/24/2010 to the Windows 7 News & Tips blog provides a brief, semi-technical description of SQL Azure and an embedded 00:04:01 video:

If you are a SQL database administrator or developer, Microsoft Azure will provide you with multiple database features normally found on your server. Cloud-based computing has received a lot of publicity. Microsoft is presenting a SQL database, SQL Azure, which is a cloud-based relational database service built on the back of existing SQL Server technologies such as SQL Server 2005 and SQL Server 2008. It provides a scalable, highly available, multi-tenant database service hosted by Microsoft in the cloud. For IT database managers, SQL Azure Database helps address the problems associated with the deployment and distribution of multiple databases.

Among the things that developers do not have to do is install, set up, patch or manage any software. High availability and fault tolerance are part of the database service, and no physical administration is required.

SQL Azure Database supports Transact-SQL (T-SQL), a standard in the database development environment. Customers can apply their existing knowledge of T-SQL development and the standard relational data model, which is familiar for its symmetry and coordination with existing on-premises databases. When database administrators use SQL Azure, they can help reduce costs by integrating their programs with existing toolsets, providing symmetry between on-premises and cloud databases.
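
To make the “existing knowledge in T-SQL” point concrete, here’s a minimal sketch (mine, not from the article) of plain ADO.NET code issuing T-SQL against a SQL Azure database; the server name, database and credentials below are placeholders, not real values.

```csharp
// A minimal sketch: ordinary ADO.NET plus T-SQL works against SQL Azure because the
// service speaks the normal TDS protocol. Connection details are placeholders.
using System;
using System.Data.SqlClient;

class SqlAzureDemo
{
    static void Main()
    {
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;" +
            "Database=SampleDb;User ID=youruser@yourserver;Password=...;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Standard T-SQL; SQL Azure expects tables to carry a clustered index,
            // which the PRIMARY KEY CLUSTERED below provides.
            const string createTable =
                @"IF OBJECT_ID('dbo.Customers') IS NULL
                  CREATE TABLE dbo.Customers (
                      CustomerId INT NOT NULL PRIMARY KEY CLUSTERED,
                      Name       NVARCHAR(100) NOT NULL
                  );";

            using (var command = new SqlCommand(createTable, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}
```

Aside from the connection string and the clustered-index requirement, this is the same code you would run against an on-premises SQL Server, which is the symmetry the article describes.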

What do you get with SQL Azure?

  • You can create scalable, custom web applications. These are especially needed by small to mid-size businesses, hobbyists, and startups.
  • You can develop corporate departmental applications.
  • You can make packaged line-of-business applications. These are attractive to traditional and SaaS ISVs and to custom developers.
  • You can also bring together multiple data sources in the cloud to enable secure access from multiple locations, desktops and/or devices.

James Coenen-Eyre (@bondigeek) offers a glowing third-party review of OData in his OData – “He who dies with the most data wins” post of 4/19/2010:

OData

I have been digesting some of the OData videos from Mix10 and gaining some clarity on just what it is. Hopefully this post will give someone else some clarity as well.

“What I find especially cool about OData is that it realises the possibility of bringing together disparate data sources in a consistent manner thus simplifying the ability to create an application that combines data from different sources seamlessly.”

So what is OData? Well it is closely related to WCF Data Services which at one point were called ADO.NET Data Services and originally had a code name of Astoria. You can get the history over at Wikipedia here.  The official website can be found here www.odata.org and it’s a really nicely laid out site. Simple but informative. I think Microsoft are going for the OpenSource angle here with the look and the content and I would say they have succeeded. The more cynical out there might see this as somehow sinister. I choose to see this as Microsoft maturing and speaking the language that developers (in particular OpenSource developers) want to hear.

Ok you still haven’t answered my question. What is OData?

Well as it says over there, it’s the Open Data Protocol.

Having watched the videos, ferreted around a bit and used WCF and ADO.NET plenty over the years I see the OData Protocol as being the glue that is needed to bring together diverse sets of data in an Open and Standard way.

It’s built on Open Standards after all: REST, HTTP, XML, Atom and JSON.

It lets us consume data in a standard way on both the client and server and over HTTP. Even cross domain calls are easy.

What I find especially cool about OData is that it realises the possibility of bringing together disparate data sources in a consistent manner thus simplifying the ability to create an application that combines data from different sources seamlessly.

And it doesn’t end there. It does so in a web friendly way giving us access to Read/Write data using standard HTTP URLs with as fine a grain of control as we require and utilising standard web security protocols. …
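
To make that URL-addressability concrete, here’s a minimal sketch (mine, not bondigeek’s) of hitting an OData feed from .NET. The service URL and entity names are made up; only the query options ($orderby, $top, $skip, $filter) come from the OData URI conventions documented at odata.org.

```csharp
// A minimal sketch of consuming a hypothetical OData feed over plain HTTP.
using System;
using System.Net;

class ODataDemo
{
    static void Main()
    {
        // Sort and page the feed purely through the URL. A filter such as
        // "$filter=Price gt 20" works the same way (spaces URL-encoded as %20).
        const string url =
            "http://example.com/MyService.svc/Products?$orderby=Name&$top=10&$skip=10";

        using (var client = new WebClient())
        {
            // Ask for JSON; Atom is the default representation.
            client.Headers[HttpRequestHeader.Accept] = "application/json";
            Console.WriteLine(client.DownloadString(url));
        }
    }
}
```

The same URL works equally well from jQuery on the client, which is the consistency Jamesy goes on to describe.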

Jamesy continues with “jQuery Support,” “Data Mashups” and “Developer Support” topics, and then concludes:

I couldn’t possibly cover everything there is to know about OData in this article, but I would encourage you to check out the links throughout this article and explore. This is perhaps the most exciting Data Protocol I have seen in quite some time. I say that as it builds beautifully on existing tried and true technologies like LINQ, WCF and open standards.

Vitkaras reached Part 4 of his WCF Data Services Expressions series on 4/16/2010 with Data Services Expressions – Part 4 – Accessing properties, which I missed when posted:

In this part we will talk about accessing properties in the expressions. WCF Data Services may need to access a property value in many different places in the query, but it always uses the same way to do so. Below I’ll use the $filter as an example, but the same would apply to any other place in the query (for example $orderby, $select and so on).

Property metadata

In the WCF Data Services world the shape of the data is defined using metadata classes like ResourceType and ResourceProperty. To get a general idea of how to do that, take a look at this series by Alex. Note that even if you’re not using a custom provider, underneath, both of the built-in providers define the metadata in the same way, so there’s no hidden magic there.

Currently WCF Data Services uses three kinds of resource types: Entity types, Complex types and Primitive types. Each resource type is defined in the metadata by an instance of ResourceType class.

Both entity and complex types can define properties. A property can be of any resource type (with some limitations, but that’s for another topic). Each property is defined in the metadata by an instance of the ResourceProperty class.

The value of a property is either a primitive value, an instance of an entity type, an instance of a complex type, or null. The actual CLR type of the property value is determined (and defined) by the resource type of the property. Each resource type has an instance type, which is the CLR type used to store an instance of that resource type. For example, the primitive resource type Edm.String has the instance type System.String. For entity and complex types, the instance type is defined by you in the metadata; usually it’s some class. …
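
As a rough illustration of where this ends up (mine, not Vitek’s code, and not the exact trees the runtime builds), here is the System.Linq.Expressions equivalent of a $filter such as Name eq 'Contoso': a parameter, a property access, an equality node and a lambda.

```csharp
// A rough illustration of the kind of LINQ expression tree a $filter over a
// property access boils down to.
using System;
using System.Linq.Expressions;

public class Customer
{
    public string Name { get; set; }
}

class FilterExpressionDemo
{
    static void Main()
    {
        ParameterExpression it = Expression.Parameter(typeof(Customer), "it");

        // The property access this series discusses: reading the value of a
        // property off an instance of its declaring type.
        MemberExpression name = Expression.Property(it, "Name");

        BinaryExpression body = Expression.Equal(name, Expression.Constant("Contoso"));

        Expression<Func<Customer, bool>> filter =
            Expression.Lambda<Func<Customer, bool>>(body, it);

        Console.WriteLine(filter);   // roughly: it => (it.Name == "Contoso")
        Console.WriteLine(filter.Compile()(new Customer { Name = "Contoso" })); // True
    }
}
```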

Vitek continues with sample source code.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Jeffrey Schwartz reports Microsoft Set To Release ADFS 2.0 (formerly codenamed “Geneva”) in this 4/23/2010 post to Virtualization Review:

Microsoft is expected to release Active Directory Federation Services 2.0, a key add-in to Windows Server 2008 that promises to simplify single sign-on authentication to multiple systems and the company's cloud-based portfolio.

ADFS 2.0 (formerly code-named "Geneva"), which provides claims-based authentication to applications developed with Microsoft's recently released Windows Identity Foundation (WIF), will be available "in any day," said J.G. Chirapurath, senior director in Microsoft's Identity and Security Business Group, in an interview.

Microsoft's claims-based Identity Model, implemented in the Windows Communication Foundation of the .NET Framework, presents authentication schema such as identification attributes, roles, groups and policies as claims, along with a means of managing those claims as tokens. Applications built by enterprise developers and ISVs based on WIF will also be able to accept these tokens.
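
As a minimal sketch of what that looks like to application code (mine, not from the article), assuming WIF 1.0’s Microsoft.IdentityModel.Claims types: once ADFS 2.0 has issued a token and WIF has validated it, the application simply enumerates claims on the current principal.

```csharp
// A minimal sketch, assuming the Microsoft.IdentityModel.Claims namespace from WIF 1.0.
using System;
using System.Threading;
using Microsoft.IdentityModel.Claims;

class ClaimsDemo
{
    static void Main()
    {
        ListClaims();
    }

    static void ListClaims()
    {
        var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
        if (principal == null)
        {
            Console.WriteLine("Not running under a claims-aware pipeline.");
            return;
        }

        foreach (IClaimsIdentity identity in principal.Identities)
        {
            foreach (Claim claim in identity.Claims)
            {
                // e.g. role, e-mail or group claims issued by ADFS 2.0.
                Console.WriteLine("{0} = {1} (issuer: {2})",
                    claim.ClaimType, claim.Value, claim.Issuer);
            }
        }
    }
}
```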

ADFS 2.0 likely will prove important to Microsoft's overall cloud computing vision. Several Microsoft officials and outside observers believe that as enterprise customers, ISVs and software-as-service providers add ADFS 2.0 to their Windows Server environments, it will remove a key barrier to those reluctant to deploy their apps to the cloud.

Pass-through authentication in ADFS 2.0 is enabled by accepting tokens based on both the Web Services Federation (WSFED) and Security Assertion Markup Language (SAML) standards. While Microsoft has long promoted WSFED, it only agreed to support the more widely adopted SAML spec 18 months ago.

Many enterprises have expressed reluctance to use cloud services, such as Microsoft's Windows Azure, because of security concerns. "Security [issues], particularly identity and the management of those identities, are perhaps the single biggest blockers in achieving that nirvana of cloud computing," Chirapurath said.

Because ADFS 2.0 is built into Windows Azure, organization[s] can now offer claims-based tokens that will work with both Windows Server 2008 and Microsoft's cloud-based services, enabling hybrid cloud scenarios. A user can authenticate to Azure or Windows Server and WIF-enabled applications using InfoCards built into Windows 7. [Emphasis added.]

"Just like e-mail led to the explosive use of Active Directory, Active Directory Federation Services will do the same for the cloud," Chirapurath said.

Danny Kim, CTO of Boston-based Microsoft Certified Gold Partner FullArmor, who has stress-tested ADFS 2.0, said it is ready for production.

"ADFS 2.0 does the linking of identities back to our server space and our cloud-based services and we have one version that works across all of those environments," Kim said. One major financial services firm wants to roll it out right away to allow its users to authenticate applications running on FullArmor's Windows Azure-based applications, Kim added.

"This is a security-conscious company that has said unless security is guaranteed, we are not going to deploy these services in the cloud," Kim said. ADFS maps the user's token into Active Directory, which is passed through to other ADFS-enabled systems, he explained. …

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Brian Galicia of the Microsoft Dynamics CRM Online Team reports New innovations with CRM Online from Convergence 2010 on 4/25/2010:


Press release

1 - May 2010 Service Update:

Expanded language support for Microsoft Dynamics CRM Online. The May 2010 service update is the first wave for moving Microsoft Dynamics CRM Online into international markets with multilingual capabilities for North American customers who have departments or teams with French, Spanish or Brazilian Portuguese language requirements.

New Portal accelerators. The service update adds new Portal accelerators for Microsoft Dynamics CRM Online. These accelerators include Event Management, eService and Partner Relationship Management and allow organizations to extend their reach and extend the power of CRM to external constituents.

New developer tools and resources. The service update provides new developer tools in the updated Microsoft Dynamics CRM Software Development Kit (SDK) to enable the connection of Microsoft Dynamics CRM Online with other on-demand and on-premises applications and services such as Windows Azure. [Emphasis added.]

Microsoft Dynamics GP and Microsoft Dynamics CRM integration. A new built-in integration framework for Microsoft Dynamics GP and Microsoft Dynamics CRM enables customers to easily connect the two systems to help provide business insight and improve user productivity across financials, supply chain, and sales and service. The integration is supported with both on-premises and cloud versions of Microsoft Dynamics CRM for Microsoft Dynamics GP 2010 and Microsoft Dynamics GP 10. [Emphasis added.]

 

Derrick Harris asserts With SaaS, Microsoft Sweetens Its Azure Offering with TownHall in this 4/25/2010 abstract of his Why SaaS and PaaS Could Equal Cloud Computing Gold article for GigaOm Pro (requires paid subscription):

Microsoft this week rolled out its CampaignReady suite of services, which is anchored by the Windows Azure-hosted TownHall. Designed for political campaigns, the suite works by letting candidates connect with constituents via TownHall, while Microsoft’s online collaboration and advertising tools help campaign workers communicate with each other and spread their messages. Especially for local or regional campaigns without the resources to build the specialized tools President Obama’s campaign utilized, Microsoft’s pitch — a prepackaged solution that can be set up, torn down and paid for on demand — should be appealing. But Microsoft’s SaaS-plus-PaaS business model has legs beyond politics, and beyond Redmond.

As I describe in my column this week at GigaOM Pro, the combination of cloud services designed for and hosted on cloud platforms seems like a surefire strategy to secure PaaS (or even IaaS) adoption. By creating targeted applications designed specifically for use on their platforms, cloud providers can increase the likelihood of bringing customers into the fold (and can increase their profit margins) by letting applications help sell the platform instead of relying on the platform itself.

The possibilities are perhaps best exemplified by the number of Salesforce.com customers using its flagship CRM offering, which sits atop its Force.com platform — more than 72,000, according to the company. Presumably, it was positive experiences with the SaaS application that inspired 200,000-plus developers to build more than 135,000 custom applications that run on Force.com. It’s possible that Force.com could have attracted an equally large base as a standalone offering not intrinsically connected with Salesforce.com’s SaaS business, but unlikely.

The issue for most cloud providers is figuring out how to develop an application strategy to complement their infrastructural competencies. Microsoft, on the other hand, brought its decades of software experience with it when it launched Windows Azure. It developed its Pinpoint marketplace of third-party applications ready to run on the platform, it partnered with business-friendly ISVs like Intuit, and now it’s gotten into the SaaS act itself with TownHall. Azure has garnered its fair share of praise, and if Microsoft continues down the SaaS path, Azure could garner more than its fair share of customers and dollars. Read the full post here.

GigaOM Network will present its Structure conference on 6/23 and 6/24 in San Francisco.

See CloudBerryMan’s Manage Azure Blob Storage with CloudBerry Explorer post of 4/25/2010 to CIO.com’s Cloud Computing News blog in the Azure Blob, Table and Queue Services section.

Jim O’Neill’s Feeling @Home with Windows Azure, at Home post of 4/24/2010 provides additional information about Stanford University’s Folding@Home project:

I’ve spent a lot of time over the past few months working with and talking about Windows Azure to a variety of audiences, but if you’re anything like me, it can be hard to truly ‘get it’ until you touch it. So to bring it all home (pun intended), my colleagues Brian Hitney, John McClelland, and I have been working on a pretty cool virtual event.

@Home with Windows Azure is a two-hour webcast, which we’ll be repeating over the next couple of months, through which we’ll cover many of the core components of Windows Azure in a rather unique context; this is not your father’s “Hello World” experience!

The application we’ll be building leverages Stanford University’s Folding@home project.  Folding@home is a distributed computing application, where participants donate spare cycles of their own machines (at home, get it?) to running simulations of protein folding.

Protein folding refers to the process by which proteins assemble themselves to perform specific functions, like acting as enzymes or antigens. When things go wrong, diseases and conditions such as Alzheimer’s and cystic fibrosis can result, so understanding why things go wrong is critical to advancing treatment and finding cures. Protein folding, though, is a very compute-intensive operation, and simulating just a nanosecond of folding activity can take as much as a day of CPU time!

That’s where Windows Azure comes in.  During these sessions we’ll walk through the steps to build a cloud application that will use Azure compute instances to contribute to the Folding@home project.   Registrants will get a no-strings-attached, two-week Azure account as part of the deal, so you’ll have a chance to build your own Azure application and contribute to medical science – how cool is that! …
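
A skeletal worker role for this kind of job might look like the following. This is my sketch, not the actual @Home with Windows Azure code, and the folding client’s file name and command-line switch are hypothetical placeholders.

```csharp
// A skeletal sketch of an Azure worker role that keeps a folding client busy.
// The executable name and arguments are hypothetical placeholders.
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class FoldingWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            // Launch the (hypothetical) Folding@home console client, wait for the
            // work unit to complete, then loop to pick up the next one.
            var folding = Process.Start(new ProcessStartInfo
            {
                FileName = @"Folding\FAH-Console.exe",   // placeholder path
                Arguments = "-oneunit",                   // placeholder switch
                UseShellExecute = false
            });

            folding.WaitForExit();
            Thread.Sleep(TimeSpan.FromSeconds(10));       // brief pause between units
        }
    }
}
```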

Joannes Vermorel’s students in the "Génie logiciel et Cloud Computing" course of the ENS (Paris) have produced a “browser-based, minimalistic, massively multiplayer strategy game” called Sqwarea that runs in Windows Azure. From Sqwarea’s description on Codeplex:

This is a project for a browser-based, minimalistic, massively multiplayer strategy game.

It is done as a part of the "Génie logiciel et Cloud Computing" course of the ENS (Paris)

Rules and description of the game

Introduction
  • You are a King battling over a gigantic map to conquer the world. Train soldiers, conquer new territories, and resist the assaults of other kingdoms.
  • This is a massively multiplayer online game: all players are on the same single map. The map is an infinite 2D square matrix.
  • The game is time-constrained, played over weeks (even months, or years) rather than minutes. Players are typically expected to play sessions of a few minutes once or twice a day.
Map

The world is flat (a well-known Middle Age fact) and looks like a square grid. Each land square can be either neutral (in grey below) or part of a kingdom (colors below). Each kingdom has a King (in black below), who can be positioned on top of any land square of his kingdom:


Map example

Two squares are considered to be adjacent if they have a common edge: common vertices are not enough. The goal for each King is, of course, to spread his kingdom. …
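
A small sketch (not Sqwarea’s actual code) of that adjacency rule: two squares share an edge exactly when their Manhattan distance is 1.

```csharp
// Edge adjacency on an infinite 2D grid: diagonal neighbours (common vertex only)
// don't count.
using System;

public struct Square
{
    public readonly long X;
    public readonly long Y;

    public Square(long x, long y) { X = x; Y = y; }

    public bool IsAdjacentTo(Square other)
    {
        long dx = Math.Abs(X - other.X);
        long dy = Math.Abs(Y - other.Y);
        return dx + dy == 1;   // Manhattan distance of 1 means a shared edge
    }
}

class AdjacencyDemo
{
    static void Main()
    {
        var a = new Square(0, 0);
        Console.WriteLine(a.IsAdjacentTo(new Square(0, 1)));  // True  (shared edge)
        Console.WriteLine(a.IsAdjacentTo(new Square(1, 1)));  // False (shared vertex only)
    }
}
```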

Robin Morrisset continues with additional play details.

Windows Azure Infrastructure

Pedro Hernandez’s Microsoft Joulemeter: Using Software to Green the Data Center post of 4/25/2010 to the GigaOm blog is a Q&A session on Microsoft Research’s energy monitoring initiatives:

Exactly how much power does it take to run a virtual machine or a specific piece of software? That’s the question the Joulemeter team at Microsoft Research is hoping to answer for IT managers.

Over the years there have been several innovations to reduce energy usage in the data center, from custom, low-power servers to non-traditional cooling approaches. But more recently, attention has been turning to one feature common to all IT infrastructures: software. Intel, for example, recently unveiled its Energy Checker SDK in a bid to help developers optimize their code for energy efficiency. Now Microsoft is getting in on the act with Joulemeter.

According to Jie Liu, a senior researcher at Microsoft Research Redmond, along with his fellow researchers Dr. Aman Kansal and Mr. Michel Goraczko, Joulemeter could have big implications for planning and monetizing virtual server environments and cloud infrastructures. In an email Q&A with Liu, he answers some questions about Joulemeter and its green data center potential. …

Pedro continues with a “lightly edited” version of the Q&A.

Dom Green defines an Infrastructure Access Layer to abstract away your underlying [Azure or other cloud] infrastructure from your business logic or processing in this 4/25/2010 post:

The Infrastructure Access Layer, or IAL for short, is a simple concept and nothing new; in fact, you have all been using something similar for years when programming against your databases. I have just expanded the concept to the cloud (and to the ability to create “cloud ready” applications).

Infrastructure Access Layer (IAL)


The Infrastructure Access Layer is simply a way to abstract away your underlying infrastructure from your business logic or processing. This means that I can easily change how my applications interact with the infrastructure just by changing a method within the IAL, and the business logic is none the wiser.

Let’s take a simple example: I may be developing an application that takes work items and processes them. I create a method in the IAL called GetWorkItem which returns to my business logic the work item to be processed. This method can then call out to Azure Queues to get a work item. In the future, you may want to switch out where the work item comes from: maybe you want to use a web service to call another location, use blob storage to store large work items, or listen for a work item to be passed onto a service bus. This can all be dealt with in a single place.


All the business logic knows and cares about is that it is going to call the GetWorkItem method and get a work item back, not where it came from or how it was received.
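
Here’s a minimal sketch of the idea in C#; the IWorkItemAccess interface and class names are mine (hypothetical, not Dom’s), and the Azure-backed implementation assumes the StorageClient library from the Azure SDK.

```csharp
// A minimal sketch of an Infrastructure Access Layer with a queue-backed implementation.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public interface IWorkItemAccess
{
    // Returns the next work item, or null if none is available right now.
    string GetWorkItem();
}

// IAL implementation backed by an Azure Queue.
public class AzureQueueWorkItemAccess : IWorkItemAccess
{
    private readonly CloudQueue queue;

    public AzureQueueWorkItemAccess(CloudStorageAccount account, string queueName)
    {
        queue = account.CreateCloudQueueClient().GetQueueReference(queueName);
    }

    public string GetWorkItem()
    {
        CloudQueueMessage message = queue.GetMessage();
        if (message == null) return null;

        queue.DeleteMessage(message);
        return message.AsString;
    }
}

// Swapping infrastructure (MSMQ, a web service, the AppFabric Service Bus) means
// writing another IWorkItemAccess; the business logic below never changes.
public class Processor
{
    private readonly IWorkItemAccess ial;
    public Processor(IWorkItemAccess ial) { this.ial = ial; }

    public void ProcessNext()
    {
        string workItem = ial.GetWorkItem();
        if (workItem != null) Console.WriteLine("Processing: " + workItem);
    }
}
```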

Cloud Ready…

Even with more and more people moving toward the cloud, there are still companies that aren’t quite ready to make the jump but realise that in the near future they may have to, and they want to know how to create applications that will be able to transition easily to the cloud when they are ready.

By programming against the Infrastructure Access Layer, companies will be able to make this move a lot smoother, just replacing the needed classes so that they interact with the cloud rather than the old infrastructure.

A great example of this would be using a messaging system such as MSMQ as part of your on-premises application, and then being able to easily switch it out for Windows Azure Queues or the AppFabric Service Bus when moving into the cloud, with minimal if any changes to your business logic.

Keeping the cloud contained

As I talked about in my last post, the cloud can end up getting everywhere, and you can soon end up breaking the principle of DRY (Don’t Repeat Yourself).

With the IAL you can easily keep the cloud contained, letting a minimal number of projects have access to the cloud and making SDK upgrades and maintenance easier.

Is that it?

It is indeed. As I said at the start, the IAL is a simple concept that allows you to abstract away your underlying infrastructure from your business logic and also helps create cloud-ready applications for when people are ready to make that push to the cloud.

The RetirementJobs.com website reports that Microsoft is seeking a Director of Test, Cloud Computing as of 4/23/2010:

We are looking for a Director of Test who is passionate about re-defining how services are engineered. Our vision is to deliver testing as a service - sharing mature ideas with Windows Azure developers and ISVs. We work in small agile teams taking big innovative bets. If you want to work on hard technical challenges and have a large impact on improving the agility of engineering services, with a personal growth path, then this could be the position for you. [Emphasis added.]

Responsibilities

  • Leading a team of engineers to design and implement a testing model for a massively scalable and available system.
  • Define the platform quality bar and mechanisms for prevention and earlier detection of SLA issues.
  • Employ innovative techniques to detect potential perf, scalability, deployment, or operational (failure recoveries) regressions.
  • Develop end-to-end automation for engineering workflow - from developer desktop to production.
  • Implement development of shared test infrastructure for platform wide testing
  • Coach and develop team members.

Qualifications

  • BS or MS in Computer Science or equivalent.
  • 10+ years of experience leading development or test of software platforms, preferably in server or cloud-based systems.
  • Proven track record of technical accomplishment and high quality delivery.
  • Ability to build solid technical teams and mentor leads
  • Strong project management skills.
  • Knowledge of various testing techniques and approaches.

My take is that a retiree isn’t likely to be hired for this job. I wonder what the charge for “testing [Azure] as a service” will be.

    SnoBlaze introduces its blazeS+S™ Framework for Windows Azure in this detailed 4/2010 PowerPoint deck:

    • Reducing the overlap between Azure and implementing a S+S application …
    • Decreasing the cost of development for new S+S applications …
    • Shortening the time it takes to develop and deploy applications …

    blazeS+S™ is a framework providing a common architecture to facilitate the development of S+S applications in the Azure cloud.

    The framework consists of architectural design guidelines, class libraries, interfaces, development standards and a common way for implementing and integrating S+S services or extending existing services living in the Azure Cloud.

    What Is Blaze S+S?

      • Developed by SnoBlaze Corporation (SnoBlaze) in Q4 2009.
      • S+S framework that provides all the underlying infrastructure required to build a S+S application on the Azure Platform.
      • S+S Delivery Platform that contains components generally required as a service delivery infrastructure for any S+S oriented application.
      • Solution Accelerator that reduces the burden on the development team of creating the underlying “plumbing” code, freeing the team to focus more on building the components required to meet the business needs.
      • S+S software framework that helps the transition of an existing business application to the S+S model, or the building of a S+S-enabled application from scratch.

    The remaining slides disclose the details of SnoBlaze’s business plan for the blazeS+S framework.

    <Return to section navigation list> 

    Cloud Security and Governance

    Chris Hoff’s (@Beaker) The Four Horsemen Of the Virtualization (and Cloud) Security Apocalypse… post of 4/25/2010 relates:

    I just stumbled upon this YouTube video (link here, embedded below) interview I did right after my talk at Blackhat 2008 titled “The 4 Horsemen of the Virtualization Security Apocalypse (PDF)” [There's a better narrative to the PDF that explains the 4 Horsemen here.]

    I found it interesting because while it was rather “new” and interesting back then, if you ‘s/virtualization/cloud‘ especially from the perspective of heavily virtualized or cloud computing environments, it’s even more relevant today!  Virtualization and the abstraction it brings to network architecture, design and security makes for interesting challenges.  Not much has changed in two years, sadly.

    We need better networking, security and governance capabilities!

    Same as it ever was.

    Beaker concludes with links to his related articles.

    <Return to section navigation list> 

    Cloud Computing Events

    My Three Azure-Related Sessions at Microsoft’s Convergence 2010 Conference post of 4/25/2010 notes:

    Microsoft Dynamics’ Convergence 2010 conference, being held this week (4/24 through 4/27/2010), includes a few cloud-related sessions:

    CSCRM22 Microsoft Dynamics CRM: Building Relationship Management Solutions in the Cloud

    • Monday, April 26 1:30 PM - 2:30 PM
    • Session Type: Concurrent Session
    • Skill Level: 300 – Experienced
    • Audience: Technical Decision Maker
    • Speaker(s): Girish Raja, Nikhil Hasija

    IDCRM22 Connecting in the Cloud

    • Monday, April 26 4:30 PM - 5:30 PM
    • Session Type: Interactive Discussion
    • Track: Microsoft Dynamics CRM
    • Skill Level: 300 – Experienced
    • Audience: Technical Decision Maker
    • Speaker(s): Craig Fleming, Dennis Ambrose, Heidi Tucker

    CSPP04 Leveraging Microsoft Azure to Extend Your Microsoft Dynamics ERP Solution

    • Tuesday, April 27 1:30 PM - 2:30 PM
    • Session Type: Concurrent Session
    • Track: Potpourri
    • Skill Level: 200 – Intermediate
    • Audience: Technical Decision Maker
    • Speaker(s): Ashvin Mathew, Ricky Gangsted-Rasmussen

    See the original post for session descriptions.

    Chris Kanaracus quotes Stephen Elop’s keynote at Convergence 2010 in this Microsoft Exec: We and Users Win With Cloud post of 4/25/2010 to PCWorld magazine:

    Microsoft is firmly on the cloud-computing bandwagon and with good reason -- it can make more money by doing so, even as it helps customers cut costs, business division head Stephen Elop said at the Convergence conference in Atlanta on Sunday.

    The company has spent billions so far to build out its cloud infrastructure for massive economies of scale, Elop said during a meeting with press and analysts. "What we're doing is taking cost savings and taking them back to the customer."

    Microsoft will win out as well, Elop contended. Microsoft is not only selling applications via the cloud, but raw computing power and a development platform with its Azure service.

    "What we've done [in the past] is sell software," Elop said. "In the cloud world we're still selling that same software but we're also participating in a bigger part of customers' IT budgets. We're going after more of the pot." Some 90 percent of Microsoft's engineering team will be working on cloud computing in some way within a couple of years, according to Elop.

    Convergence is Microsoft's conference for its Dynamics ERP (enterprise resource planning) and CRM (customer relationship management) applications. It provides the main platform for company executives to speak directly to users of the software, as Dynamics products are mostly sold by partners.

    Cloud computing places Microsoft and its customers "at the center of a remarkable transition in and around everything we do with technology," Elop said during a keynote address earlier Sunday. However, "the most important message is that we are committed to partnering with you through this period of generational change," he added.

    Elop's keynote focused on how Microsoft has worked to integrate Dynamics software with other parts of its portfolio, such as Office and Unified Communications. The result is a cohesive stack that will save customers money and time, with the whole benefit "greater than the sum of the parts," he said. …

    Bill Zack’s Public Clouds: Clear or Murky? post of 4/24/2010 reviews Cloud Computing Expo New York 2010:

    This week I had the privilege of attending the Cloud Computing Expo in New York at the Jacob Javits Center.


    I was very impressed with how the attendance and interest had increased from last year's event. There were scores of vendors there hawking everything from Public and Private Cloud offerings to management tools and services to, in fact, anything that could remotely be branded (or rebranded) "Cloud".  It reminded me of the early days of SOA when every vendor was rushing to rebrand whatever they had as "SOA".

    But I do sympathize with the attendees who were there to learn and understand what the Cloud is all about and how they can leverage the new paradigm of cloud computing in their businesses.  Many of the keynotes were thinly disguised (and in some cases totally undisguised) sales pitches for one vendor’s product or another.  Some of the breakout sessions were less sales-pitchy, but … too many of them were.

    First a disclaimer: What follow[s] is strictly JMHO (Just my humble opinion) and not the position of Microsoft, my employer.  But it is based on my observations as an architect and a fair amount of sympathy for the attendees. It is an attempt to look at the conference from their point of view. To be clear, I am a Microsoft Architect Evangelist and a strong believer in the Microsoft Cloud and in our hybrid (on-premises + cloud) based approach to supporting our customers.

    At this event I also had the privilege of presenting the session on Cloud Design Patterns and I do pride myself that, although we used Azure to illustrate the patterns, the beauty of these patterns is that they can be applied to building applications on almost any vendor's cloud platform.

    One thing that seemed abundantly clear to me: A lot of the vendors represented there were new startups.  By next year I fully expect that 60-70% of them will be gone having merged, been acquired or failed. As a customer, therefore, I would be hesitant to put my trust in a company that might not be around a year from now.  So when evaluating public cloud providers (which is the focus of this post) I would limit my consideration to what I call the Big 4: Amazon, Microsoft, Force, and Google. I realize that there are others (like RackSpace) that will be offended by not being included and that will definitely be around in the future, but I had to draw the line somewhere. :-) …

    Bill continues with his review and concludes with descriptions of the Big 4’s offerings.

    Claude (the Cloud Architect) reminds about Cloud Asia 2010 - Singapore May 3-7, 2010 in this 4/23/2010 post:

    … In a short span, CloudAsia has become an exciting major annual event for the region, bringing together international researchers, practitioners and industry players in Grid and Cloud Computing as well as related technologies and applications.

    CloudAsia aspires to be an important platform to enable attendees and participants to share experiences, discuss their research findings, network and explore opportunities for collaboration. Delegates are expected to come from local and overseas academia, research institutes and laboratories, both in the public and industry sectors (such as biomedical, digital media, physical sciences and manufacturing).

    For More Information:
    http://cloudasia.ngp.org.sg/2010/main.php

    Andy Patrizio asks How Will Data Centers Change to Fit SaaS Needs? in this 4/22/2010 post to the EarthWeb.com blog about the AlwaysOn OnDemand conference:

    The traditional data center will have to evolve as the world adopts more of the software-as-a-service (SaaS) model, said panelists here at the AlwaysOn OnDemand conference, hosted by Hewlett-Packard at its headquarters.

    The issue posed to the four panelists by David Thomas, executive director of TechAmerica Silicon Valley and one of the first SaaS founders when he created Intacct in 1999, was to define what the data center is today and what IT will be advocating in the future.

    "It comes down to where does the industry find the best economic benefit. We're not looking at the ultimate victory in adopting either an Amazon [outsourced] model or traditional model, but where the balance lies," said James Urquhart, market strategist and technology evangelist for cloud computing at Cisco Systems (NASDAQ: CSCO).

    Not only will there be a future balance of internal IT and outsourcing, there will also be a balance of vendors. "I don't believe you see any evidence that everyone will pick a company and go 100 percent with them," said Urquhart.

    Doug Merritt, executive vice president of on-demand solutions at SAP, agreed. "The segregation of layers in computing has always existed and there are even more layers now with an increase in requirements," he said.

    A single stack player that attempts to provide everything never does well, Merritt noted, and you have best of breed players who emerge. "There will be a need for a super dynamic stack for the next decade because the computing need is so strong. We need a shot in the arm after a decade of muddling around," he said.

    And those players, notoriously bad at playing nice with each other, had better learn to interoperate, the panel said, because future generations are growing up with easy interoperation of things like smartphones and videogame consoles and other consumer devices.

    The Windows Azure Bootcamp Team announced on 4/25/2010 a two-day Azure BootCamp on 5/13 and 5/14/2010 from 8:00 AM to 5:00 PM in St. Louis, MO:

    Address
    1 City Place Dr
    City Place Auditorium
    St. Louis, MO 63141
    Trainers
    Special Notes
    • Snacks will be provided. Attendees must provide their own lunch.
    • See our What to Bring page to see what you need to have on your computer when you bring it.

    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Lucia Moses reports Newsweek.com Explores Amazon Cloud Computing in this 4/25/2010 article for MediaWeek:


    Newsweek, under a cloud, is going to the cloud. The site is outsourcing its Web site hosting duties to Amazon, joining a small but growing number of companies experimenting with cloud computing.

    Until now, Newsweek.com had been hosted by its parent company, The Washington Post Co. The media company has been trying to cut losses at its magazine division, which recorded $29.3 million in operating losses in 2009. By joining the cloud, Newsweek expects to save close to $500,000 annually.

    “It saves Newsweek money,” said Geoff Reiss, vp, general manager, Newsweek Digital. “Lots of people out there built their own infrastructure and are going to be tortured by this idea of sunk costs.”

    That’s not the only revamp Newsweek.com is making. This week it’s expected to unveil a redesign that eschews the big branding statements and oversized ad units that are standard on many media sites. On the new site, the Newsweek name will shrink and a significant banner ad position goes away.

    In its place, the title is adopting a stripped-down design that Facebook users will find familiar. It’s anchored by a newsfeed that gives equal weight to postings—regardless of whether they’re blogs, columns or news. Newsweekopedia, a branded search feature introduced last year, will be replaced with the brand-neutral Topic Finder. And in another sign it’s not business as usual, Newsweek.com hired ex-Gawker editor Gabe Snyder as executive editor.

    Liz McMillan reports “Canonical Says It's Adding 200 Per Day” in her Ubuntu Enterprise Cloud Claims 12,000 Deployments post of 4/24/2010:

    Don't rule out Linux for the Cloud. For example, it's been reported that Canonical COO Matt Asay recently said, "We're now tracking 12,000 active deployments of UEC and we're adding 200 every single day. These are active installations. These aren't people who downloaded it once and kicked the tires for a few days then dropped it."

    There's always the pesky payment problem, and Asay was quoted as saying that Canonical wants to be "more aggressive" in converting free installations to paid.

    These remarks came at the end of a week that saw the world's largest Cloud Computing event ever held, Cloud Expo at the Javits Center in New York, which attracted some 5,000 delegates and more than 100 exhibitors. …

    <Return to section navigation list> 
