Tuesday, September 08, 2009

Windows Azure and Cloud Computing Posts for 9/3/2009+

Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.

•• Update 9/7/2009: Extensible Azure Projects, Cloud Architecture day at Microsoft London
• Update 9/6/2009: Added LLBLGen Pro and SQL Azure, more VMworld post-mortems, Ethan Zuckerman at Ars Electronica and more; fixed E. Huna’s .netTiers link.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use these links, first click the post title to display the single article; the section links will then navigate within it.

Azure Blob, Table and Queue Services

No significant new articles as of 9/5/2009, 4:30 PM PDT. Maybe some posts on Azure Storage Services will show up tomorrow.

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

•• My illustrated Using the SQL Azure Migration Wizard with the AdventureWorksLT2008 Sample Database post of 9/7/2009 describes how to use George Huey’s schema migration utility to duplicate the schemas of on-premises SQL Server databases in SQL Azure Database tables in Microsoft data centers, as well as the problems I encountered when running the Wizard against the AdventureWorksLT2008 sample database.

Frans Bouma reports LLBLGen Pro and SQL Azure are compatible in this 9/5/2009 post:

LLBLGen Pro works with SQL Azure, that is, the generated code and the runtime library. There are a couple of things you should be aware of, and I'll list them briefly below. The thing which doesn't work is creating a project from a SQL Azure database, as SQL Azure has no meta-data tables publicly available to the connected user (also a reason why, for example, SQL Server Management Studio doesn't work with SQL Azure at the moment).

Frans also provides a list of things to be aware of when you want to work with SQL Azure and LLBLGen Pro.

Emmanuel Huna says he Tested .netTiers with SQL Azure – it just works in this 9/5/2009 post:

Here's what I've tried so far:

  • Using .netTiers inside the Azure "Development Fabric" and connecting to a local on-premise SQL Server.
  • Using .netTiers inside the Azure "Development Fabric" and connecting to a SQL Azure database in the cloud.
  • Using .netTiers in a web role in the Azure Cloud connecting to a SQL Azure database in the cloud.

All I had to do to make the above work is to change my connection string - no need to change code!  Good times!
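For readers who haven't tried this, the connection-string change really is the whole trick. Here's a minimal sketch of a SQL Azure connection from Java via Microsoft's SQL Server JDBC driver; the server, database and credentials are hypothetical placeholders, and the ADO.NET change Emmanuel made follows the same pattern (point at yourserver.database.windows.net and log in as user@yourserver):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Requires Microsoft's SQL Server JDBC driver on the classpath.
    public class SqlAzureConnectionSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical server and credentials. SQL Azure logins take
            // the form user@servername, and the service requires SSL.
            String url = "jdbc:sqlserver://myserver.database.windows.net:1433;"
                       + "databaseName=AdventureWorksLT2008;"
                       + "user=myuser@myserver;password=myPassword;"
                       + "encrypt=true";

            Connection conn = DriverManager.getConnection(url);
            try {
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery(
                    "SELECT COUNT(*) FROM SalesLT.Customer");
                if (rs.next()) {
                    System.out.println("Customers: " + rs.getInt(1));
                }
            } finally {
                conn.close(); // always release the connection
            }
        }
    }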

His earlier Cleaning up scripts for SQL Azure post explains how he made his T-SQL DDL scripts compatible with SADB.

Jeff Currier expands on Patric McElroy’s reply (see below) in his SQL Azure and limits post of 9/4/2009:

Quite a few folks have been asking recently about the limits we have in SQL Azure and I thought I would take some time to chat about this here.  The question varies of course by the person asking but it inevitably goes something like the following: “Hey, why don’t you guys let me have a database bigger than 10 gigs… Are you guys ever going to support big workloads or large instances?”

The short answer is yes but the truth is almost always more complicated than simple answers like this.  Frankly, while we could increase the size here it’s important to think about what raising that limit means.  It means that if we experience a failover event (hard drive dies, node reboots, you get the picture) it may take longer to get your database up and going on another node, as data may need to be caught up from other nodes on the network.  This translates into reduced availability for you in the long run, and this is something we try really hard to avoid.

Secondly, what about the I/O?  If your databases are getting that large then might it not be more advantageous to shard?  We’ve observed that in many workloads it does.  Granted, this may not make sense in all cases but it does in many so in a sense there is a bit of tension between the need for larger individual user db’s and enhanced support for things like sharding. …
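For readers who haven't run into the term, sharding means splitting one logical database into several physical databases and routing each request to the right one. Here's a minimal sketch of the routing half, with hypothetical connection URLs and a simple modulo scheme (a real implementation would also have to handle resharding and cross-shard queries):

    import java.util.Arrays;
    import java.util.List;

    // Minimal sketch of hash-based shard routing. The connection URLs
    // and the customer-ID key scheme are hypothetical illustrations.
    public class ShardRouter {
        private final List<String> shardUrls;

        public ShardRouter(List<String> shardUrls) {
            this.shardUrls = shardUrls;
        }

        // Pick the database that holds a given customer's rows.
        public String shardFor(long customerId) {
            int n = shardUrls.size();
            // Non-negative modulo so negative keys still map into range.
            int index = (int) (((customerId % n) + n) % n);
            return shardUrls.get(index);
        }

        public static void main(String[] args) {
            ShardRouter router = new ShardRouter(Arrays.asList(
                "jdbc:sqlserver://shard0.database.windows.net:1433",
                "jdbc:sqlserver://shard1.database.windows.net:1433",
                "jdbc:sqlserver://shard2.database.windows.net:1433"));
            System.out.println(router.shardFor(12345L)); // 12345 % 3 == 0 -> shard0
        }
    }

Because every shard stays under the size cap, a failover rebuilds only one small replica, and queries that fan out across shards run in parallel; this is the same effect Patric McElroy describes below.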

Patric McElroy explains the primary reasons for SQL Azure’s 10-GB data size limitation in the Will the 10Gb Limit Increase? thread of the SQL Azure — Getting Started forum:

One of the primary drivers for the DB size limits is customer feedback.  Customers are planning to initially and primarily move web and departmental applications to a software services model.  Over time they will move bigger and more varied types of apps but the above are expected to be a primary source for the initial apps to move. 

There are also some technical reasons for a limitation on the DB size.  SQL Azure runs on a shared infrastructure, commodity hardware environment, providing automatic HA and fault-tolerance for 10,000s of databases.  Using commodity h/w helps us deliver an economical service.  In order to provide the HA/fault-tolerance, we keep multiple copies of each user’s DBs (on separate failure domains).  When a machine goes up in flames (they are commodity after all) the system will self-heal by re-building the lost replicas on other machines in the cluster.  10 GB is a reasonable balance between the needs of the initial cloud applications and the time/space limitations in copying massive amounts of data across the data center.  We have 100s of replicas on any given node and the system must remain available and healthy - so that your apps keep on running - even if multiple machines fail.

One additional thing to note is the additional performance benefit of being able to partition horizontally across a shared infrastructure environment.  A large (100s of TBs) internal service property has been working with the SQL Azure DB platform for over a year and this past summer went live with the service.  They were previously running on specialized DB hardware backed by a SAN.  Once they moved to the SQL Azure DB platform they were able to partition their data and query workload across 100s of machines all running in parallel rather than a few really big machines running more serially.  The results for their longest queries were quite dramatic.  Queries that used to take 10s of minutes returned in < 30 seconds.  The difference was simply due to the fact that running dozens of independent queries in parallel on dedicated CPUs/disks is much faster than even high-end dedicated h/w for many applications.  This was a huge win for them and for their customers.

Is the mysterious “large internal service property” HealthVault? I doubt that HealthVault has garnered enough users to consume “100s of TBs” of storage.

<Return to section navigation list> 

.NET Services: Access Control, Service Bus and Workflow

Eugenio Pace takes the next step in Exploring the Service Provider track – Fabrikam Shipping Part II (Solution) of 9/3/2009:

What we want now is for Fabrikam to be claims aware and to trust claims issued by Adatum. Claims issued by Adatum will be used for authentication and authorization. We also want to map Adatum internal roles to Fabrikam’s for authorization purposes: who will be a “Shipment Creator”? Who will be an “Administrator”?

This diagram shows Fabrikam Shipping today if used by Adatum (no claims, no federation):

[diagram of the current, non-federated setup]

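The role-mapping step Eugenio describes is conceptually a trusted-issuer check plus a lookup table on Fabrikam's side. His samples use the .NET claims stack; the sketch below renders just the shape of the mapping in Java, with hypothetical role and issuer names:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: translate roles asserted in Adatum-issued claims into
    // Fabrikam Shipping's own roles. All names here are hypothetical.
    public class ClaimsRoleMapper {
        private static final Map<String, String> ADATUM_TO_FABRIKAM =
            new HashMap<String, String>();
        static {
            ADATUM_TO_FABRIKAM.put("Order Approver", "Shipment Creator");
            ADATUM_TO_FABRIKAM.put("IT Manager", "Administrator");
        }

        public static String mapRole(String issuer, String adatumRole) {
            // Only honor claims from the issuer Fabrikam federates with.
            if (!"urn:adatum".equals(issuer)) {
                throw new SecurityException("Untrusted issuer: " + issuer);
            }
            String fabrikamRole = ADATUM_TO_FABRIKAM.get(adatumRole);
            if (fabrikamRole == null) {
                throw new SecurityException("No mapping for role: " + adatumRole);
            }
            return fabrikamRole;
        }
    }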

The .NET Services Team says on 9/3/2009 that there will be .NET Services Scheduled Maintenance (September 8th, 2009) from 8:00 AM to 2:00 PM (six hours). If the full six hours are needed, .NET Services uptime for September won’t exceed 99.17% (714 of 720 hours) for “[r]outine maintenance on the storage layer of the Access Control Service:”

Customers can expect intermittent timeouts on Access Control operations during this maintenance window. This will also impact ServiceBus and Portal.

Eugenio Pace continues his series about patterns and practices for the Windows Identity Framework with Exploring the Service Provider track – First station: Fabrikam Shipping – Part I (the scenario & challenges) of 9/1/2009:

Fabrikam is a company that provides shipping services. As part of their offering, they have an application (Fabrikam ShippingFS) that allows its customers to create new shipping orders, track them, etc. Fabrikam Shipping is delivered as a service and runs in Fabrikam’s datacenter. Fabrikam Customers use a browser to access it.

FS is a fairly standard .NET web application: the web site is based on ASP.NET 3.5, the backend is SQL Server, etc. In the current version, users are required to authenticate using (guess what): username and password!!

Fabrikam uses ASP.NET standard providers for authentication (Membership), authorization (Roles provider) and personalization (Profile).

Fabrikam Shipping is also a multi-tenant application: the same instance of the app is used by many customers. [Emphasis Eugenio’s.]

<Return to section navigation list> 

Live Windows Azure Apps, Tools and Test Harnesses

•• My illustrated Using the SQL Azure Migration Wizard with the AdventureWorksLT2008 Sample Database post of 9/7/2009 describes how to use George Huey’s schema migration utility to duplicate the schemas of on-premises SQL Server databases in SQL Azure Database tables in Microsoft data centers, as well as the problems I encountered when running the Wizard against the AdventureWorksLT2008 sample database. [Repeated from SQL Azure Database (SADB).]

•• Magnus Mårtensson’s Extensible Windows Azure projects using MEF post of 9/7/2009 explains how to extend Azure projects and contains links to earlier testability and persistence-ignorance projects:

Here is how to enable a rich extensibility model for Windows Azure projects and how to run create jobs on Windows Azure Storage only once in your Windows Azure projects. This sample and the related AzureContrib release leverage the Managed Extensibility Framework (MEF) – an upcoming .NET Framework component in .NET Framework 4.0.

There have been three releases of AzureContrib, each one aimed at making the basic Windows Azure project template a bit richer and more intelligent.

The new release of AzureContrib adds a couple of important services (AzureContrib.ServiceHosting.ServiceRuntime.Services): the IWorkService and the IOneTimeWorkService. It also adds a bit more intelligence to a Windows Azure Page, UserControl and, most importantly, to the Windows Azure WorkerRole.
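MEF is a .NET Framework component, so there is no direct Java equivalent, but the pattern it automates, discovering and composing implementations of a contract interface at runtime without compile-time references, is the same one Java's built-in ServiceLoader provides. A rough sketch, with a hypothetical contract loosely modeled on AzureContrib's IWorkService:

    import java.util.ServiceLoader;

    // Sketch of the discovery pattern MEF automates. The WorkService
    // contract is a hypothetical rendering of AzureContrib's
    // IWorkService, which is .NET code.
    interface WorkService {
        void doWork();
    }

    public class ExtensionHost {
        public static void main(String[] args) {
            // Implementations are listed in META-INF/services/WorkService
            // on the classpath and discovered here with no compile-time
            // reference from the host to any extension.
            ServiceLoader<WorkService> extensions =
                ServiceLoader.load(WorkService.class);
            for (WorkService s : extensions) {
                s.doWork();
            }
        }
    }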

•• Here are links to 23 videos about Microsoft HealthVault.

Steve Marx offers the Transcript from Today’s Windows Azure Lounge Chat in his 9/4/2009 post:

This afternoon I spent three hours hanging out in the Windows Azure Lounge.  With a longer period of time to chat and less hype, we had a more relaxed and social chat than the first time.  I like that, and that’s part of the reason I want to do these more regularly.

(Interesting stat: this chat had 530 messages over three hours.  The first chat had 572 messages in just one hour!)

As came up again in this chat, time zones are a pain, so I vow to do one of these at night (for me) so the rest of the world has a chance to hang out during normal waking hours.

For those who missed out on the fun (or were there and want to relive it!), the transcript is available:

The Oxite Team describes The Azure-Oxite Sample application in this 9/2/2009 page:

This is a simple blog engine written using ASP.NET MVC, and is designed with two main goals:

  1. To provide a sample of 'core blog functionality' in a reusable fashion.  Blogs are simple and well understood by many developers, but the set of basic functions that a blog needs to implement (trackbacks, rss, comments, etc.) are fairly complex. Hopefully this code helps.
  2. To provide a real-world sample written using ASP.NET MVC.

We aren't a sample-building team (more on what we are in a bit).  We couldn't sit down and build this code base just to give out to folks, so this code is also the foundation of a real project of ours, MIX Online.  We also created this project to be the base of our own personal blogs as well; check out this page on CodePlex to see a list of sites that use Oxite (and use the comments area to tell us about your site). …

Oxite is a project built by the team behind Channel 9 (and Channel 8, Channel 10, TechNet Edge, Mix Online): Erik Porter, Nathan Heskew, Mike Sampson and Duncan Mackenzie.  You can find out more about our team and our projects in our various posts and videos on Channel 9.

<Return to section navigation list> 

Windows Azure Infrastructure

•• Anthony Ha asks Is it time for businesses to embrace the cloud? in this 9/7/2009 post to Venture Beat’s “Conversations on Innovation” section that’s sponsored by Microsoft:

This is part of a series of posts about cutting-edge areas of innovation. The series is sponsored by Microsoft. Microsoft authors will participate, as will VentureBeat writers and outside experts.

Cloud computing has become a magic phrase over the last year or so. Everyone agrees it’s a hot trend, but people are still arguing about what it is, and who should be using it.

Over the next few days, we’ll look at the cloud as part of our Conversations on Innovation series (sponsored by Microsoft). The big question we’re tackling: What needs to come together, such as policy standards and programming models, to reach the cloud’s true potential?

To kick things off, I’ll look at where cloud computing stands, and the challenges it faces. …

Treff Laplante continues his Cloud Computing Adoption series with Cloud Computing Adoption - Part 2 of 5 of 9/5/2009, originally posted at The Central Penn Business Journal Gadget Cube:

Historically, when we take something complex and make it simple, we open up all sorts of opportunities for value. Think about the changes that happened once the Web made it simpler to buy goods and services. Consider how mobile phones and text messaging have empowered us to communicate faster and more frequently. And consider what the word processor, e-mail and spreadsheets have done for individual productivity.

Cloud computing is a lot like each of these three revolutions in that it greatly reduces the complexity of otherwise technically challenging issues. In so doing, it empowers a much larger group of individuals to address those issues.

In a cloud environment, time and money will no longer be spent performing routine administrative tasks, writing complex systems or networking code, which, though necessary, didn't directly bring value. Instead, that same time will be devoted to value-added tasks, such as analyzing business processes, building customized software functionality or integrating with powerful third-party Web services.

James Hamilton explains that Successfully Challenging the Server Tax means designing your own server systems from commodity hardware, as Microsoft does for its datacenters. James notes on 9/3/2009 that:

BackBlaze, a client compute backup company, just took another very innovative swipe at destroying the server tax on storage. Their work shows how to bring the “inexpensive” back to RAID storage arrays and delivers storage at $81/TB. Many services are building secret storage subsystems that deliver super reliable storage at very low cost. What makes the BackBlaze work unique is they have published the details on how they built the equipment. It’s really very nice engineering. …

Cory Doctorow’s Not every cloud has a silver lining article of 9/3/2009 for the [Manchester] Guardian carries this not-surprising claim: “There's something you won't see mentioned by too many advocates of cloud computing – the main attraction is making money from you:”

The tech press is full of people who want to tell you how completely awesome life is going to be when everything moves to "the cloud" – that is, when all your important storage, processing and other needs are handled by vast, professionally managed data-centres.

Here's something you won't see mentioned, though: the main attraction of the cloud to investors and entrepreneurs is the idea of making money from you, on a recurring, perpetual basis, for something you currently get for a flat rate or for free without having to give up the money or privacy that cloud companies hope to leverage into fortunes. …

David Linthicum asks Will Cost Savings Continue to Be a Significant Driver for Cloud Computing? in this 9/2/2009 post subtitled “You have to consider the cost holistically with other factors:”

Yes, but it's not the only driver. There can be substantial cost benefits when leveraging cloud computing but, as we pointed out, your mileage may vary. You have to consider the cost holistically with other factors, including strategic benefits that are typically harder to define but are there nonetheless.

It's easy to determine that cloud computing is less expensive than traditional on-premise computing by simply considering the operating expenses. The real benefit of cloud computing (or more specifically, SOA using cloud computing) is the less-than-obvious value it brings to an enterprise, including:

  • The benefit of scaling.
  • The benefit of agility.

Gordon Haff makes Ten observations about cloud computing in this 8/11/2009 post to CNet News’ The Pervasive Data Center blog:

I started following and writing about topics like Amazon Web Services and mashups even before they were corralled under the "cloud computing" moniker. But today, cloud computing is one of the hottest topics in IT.

Much of what I write about the cloud drills down on particular aspects or is a reaction to some vendor's announcement. Here I'm going to take a different approach and take a broader look at where things stand today and some of the challenges ahead.

Thanks to James Urquhart for the heads-up to this article, which I missed when posted, in his Virtualization and the cloud: Tech, talk to converge post of 9/2/2009.

<Return to section navigation list> 

Cloud Security, Standards and Governance

William Vambenepe’s Separating model from protocol in Cloud APIs post of 9/4/2009 complains that proposed cloud APIs tackle protocol concerns alongside the resource model:

What happened to the separation between the model and the protocol in management APIs? For all the arguments we had in the design of WSDM and WS-Management, this was one fundamental concept that took little discussion before everyone agreed: that the protocol (the interaction model and the on-the-wire shape of the messages used) should be defined in a way that is agnostic to the type of resource being managed (computers, elevators or toasters — the perennial silly example). To this end, WSDM took pains to release MUWS (Management Using Web Services) and MOWS (Management Of Web Services) as two different specifications. …

An excellent read. Be sure to check out the comment from Sam Johnston; he calls the post “enthralling.”

Chris Hoff (@Beaker) asks Variety & Darwinism In Solutions Is Innovation, In Standards It’s A War? in this 9/5/2009 post:

I find it quite interesting that in the last few months or so, as Cloud has emerged as a full-fledged business opportunity, we’ve seen the rise of many new companies, strategies and technologies. For the most part, hype aside, people praise this as innovation and describe it as a natural evolutionary process.

Strangely enough, with the emergence of new opportunity comes the ever-present push to standards.  Many see standards introduced too early as an innovation squasher; it inhibits free market evolution, crams down the smaller players, and lets the big fish take over — especially when the standards are backed by said big fish.  The open versus proprietary debate is downright religious.

Cloud Computing is no different.

Reuven Cohen’s One Cloud Standard to Rule them All post of 9/5/2009 begins:

Lots of discussion recently on the topic of Cloud standards and a potential Cloud standards war emerging. Thought I'd give you a quick rundown.

In an article written by Tom Nolle for Internet Evolution he asks if Multiple Standards Could Spoil Cloud Computing. In the post he says “too many standards are worse than no standards at all, because these efforts can stifle innovation and even implementation. In the case of cloud computing, there’s also the big question of whether standards being pushed for private clouds will end up contaminating the successful Internet model of cloud computing.”

Krishnan Subramanian maps The Road To Open Federated Clouds: Xen, VMware And More in his 9/4/2009 post:

VMworld 2009 is over and the battle lines are already drawn between Citrix Xen, VMware and Red Hat. Xen is the leader on the public cloud service provider side and VMware holds a near monopoly on the enterprise infrastructure side. Before we see full-scale cloud adoption on the enterprise side, it is important that these technologies interoperate with one another.

This week saw major announcements from all three virtualization players that could eventually lead to an open federated cloud ecosystem. But it is just a start and these efforts should go well beyond the soundbites of the occasion. It is a long road ahead but it is important that the process is kickstarted sooner than later. Let us recap the events that unfolded this week and try to understand it from the framework of open federated cloud ecosystem. …

Reuven Cohen looks into the challenge of federated clouds in his The United Federation of Cloud Providers post of 9/4/2009 and says “There are a number of organizations looking into solving the problem of cloud federation:”

A fundamental challenge in creating and managing a globally decentralized cloud computing environment is that of maintaining consistent connectivity between various untrusted components that are capable of self-organization while remaining fault tolerant. In the next few years a key opportunity for the emerging cloud industry will be defining a federated cloud ecosystem by connecting multiple cloud computing providers using an agreed-upon standard or interface. In this post I will examine some of the work being done in cloud federation, ranging from adaptive authentication to modern P2P botnets.

<Return to section navigation list> 

Cloud Computing Events

Mike Taulty recommends the Architect Forum “Cloud: An Architectural View” event taking place 9/25/2009 at Microsoft Cardinal Place, London:

Change is the one constant in IT and today is no exception. In a time when economic necessities dictate that we do more with less, faster and cheaper than ever before, we are still seeing projects fail at an alarming rate. The not-so-new buzz is cloud computing, which the analysts are falling over themselves to convince us is the next big thing. Well, there is no doubt that it is becoming ever more tangible as the main vendors like Microsoft seek to ready their propositions in the cloud space. Since its announcement at PDC08, Windows Azure has set itself apart from the infrastructure-as-a-service crowd by offering a full compute platform capability as a service. With another PDC due this year that will herald the launch of Azure with increased features and services, pricing and SLAs, we will see the cloud become ever more real! Undoubtedly this is the platform of choice for start-ups and ISVs, but is it ready for Enterprise time? What are the opportunities and barriers for forward thinking organisations; is it too early to take to the skies? What is the architecture of the enterprise going to look like? Is it all about private/public clouds and virtualised infrastructures? Or are these just the vestiges of an already overloaded and constrained architecture? Will the cloud really allow us to break up the silos and truly realise the service-oriented dream? Time will tell.

When: 9/25/2009 8:45 AM to 4:30 PM GMT  
Where: Microsoft Cardinal Place, Auditorium 2, 100 Victoria Street, Cardinal Place, London SW1E 5JL, United Kingdom

• Ethan Zuckerman describes his “Mapping the Cloud” presentation to Ars Electronica’s symposium on The Cloud in his The Cloud, and useful illusions post of 9/5/2009:

My talk was mostly abridged from a long talk I gave at the Berkman Center earlier this year, called Mapping a Connected World. There’s slides and a partial bibliography for that talk, and links to audio and video here. For folks who loved the mapping of airline routes, there’s a link to that project and lots more in that vein on this post on infrastructure and flow.

David Sasaki and Isaac Mao asked us all to produce artist statements for the catalog. Mine is online here, …

Ethan’s David Sasaki maps the brave new world of the cloud post of the same date begins:

We start the seminar at Ars Electronica on cloud computing in darkness, which David Sasaki reminds us, is the state of nature. We think of electric lights as pervasive, almost too cheap to meter. We don’t expect to pay to plug in our laptops. But it would be a mistake to assume that this infrastructure is universal – he shows us students in Monrovia, sitting outside the airport to study by electric lights. …

• Gregg Ness’ Welcome to the IT Revolution post of 9/4/2009 covers the first meeting of the Infrastructure 2.0 Working Group at Stanford Research Institute (SRI) on September 3, 2009:

The Infrastructure 2.0 Working Group held its first meeting at the request of three networking legends, Dan Lynch (the founder of Interop), Vint Cerf (one of the fathers of the Internet) and Bob Grossman (one of the fathers of cloud computing).  Its mission: to transform the economics of IT by driving the development of new capabilities into the network that will unleash the power of virtualization and cloud computing.  (That’s the best I can do until the next invitation-only meeting.)  You can access the agenda via the link.

• Dana Gardner claims Cloud Computing Summits Take the Trend Beyond the Hype in this 9/6/2009 post to Seeking Alpha:

Three industry conferences this week -- one underlying theme: enterprise cloud computing.

If you could sum up VMworld 2009, the Red Hat Summit and JBoss World with one uber topic, cloud takes it -- which begs the question of whether the cloud hype curve has yet peaked.

Or more compelling yet, is the interest in cloud models more than just hype, more than a knee-jerk reaction to selling IT wares in a recession, more than an evolutionary step in the progression of networked computing?

Although the slew of announcements coming out of San Francisco and Chicago this week wasn’t solely focused on the cloud, the pattern is unmistakable and could cause naysayers to think again.

Greg Schulz’s I/O, I/O, Its off to Virtual Work and VMworld I Go (or went) post of 9/5/2009 analyzes I/O virtualization and networking convergence at VMworld:

… Yes, IOV, VIO and I/O networking convergence were at VMworld in force, just ask Jon Torr of Xsigo who was beaming like a proud papa wanting to tell anyone who would listen that his wares were part of the VMworld data center (Disclosure: Thanks for the T-Shirt).

Virtensys had their wares on display with Bob Nappa more than happy to show the technology beyond an UhiGui demo including how their solution includes disk drives and an LSI MegaRAID adapter to support VM boot while leveraging off-the shelf or existing PCIe adapters (SAS, FC, FCoE, Ethernet, SATA, etc.) while allowing adapter sharing across servers, not to mention, they won best new technology at VMworld award.

NextIO, who is involved in the IOV/VIO game, was there along with convergence vendors Brocade, Cisco, QLogic and Emulex, among others. Rest assured, there are many other vendors and VARs in the VIO and IOV game either still in stealth, semi-stealth or having recently launched. …

Greg is the author of Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier) and The Green and Virtual Data Center (CRC).

Kevin Jackson reports NCOIC Holding Full-Day Cloud Computing Workshop on 9/4/2009:

The Network Centric Operations Industry Consortium will be holding an all day Cloud Computing Workshop on September 21, 2009 in Fairfax, VA. Open to the public, this workshop will focus on Net-Centric Standards and Best Practices for Cloud Storefronts and Cloud Computing Support for Tactical Networks. Invited speakers include:

The NCOIC is a unique collaboration of premier leaders in the aerospace, defense, information technology, large-scale integrator and services industries. The Consortium works in tandem with customers from around the world, each with a specific mission, to provide a set of tools that enable the development of network centric capabilities and products. …

When: 9/21/2009   
Where: Hyatt Fair Lakes hotel, Fairfax, VA, USA

Keith Ward wrote the following VMworld-centric articles for 1105 Media’s Virtualization Review newsletter:

Andreas Grabner’s VMWorld 2009 – vCloud and Performance Monitoring reports on Day 3 of VMWorld on 9/3/2009:

It is Day 3 at VMWorld 2009 and the “promised” announcements from yesterday’s keynote finally hit the wire. 1000+ Service Providers – including AT&T, Verizon, Savvis, Terremark – are going to offer Cloud Services based on VMWare’s Cloud OS – read the full press release here: http://www.vmware.com/company/news/releases/vcloud-express-vmworld09.html. …

Randy Bias contends VMware’s vCloud API Forces Cloud Standards in this 9/2/2009 post:

We’re in the midst of a monumental transformation of the IT space, namely cloud computing, and the transformation is stalled.  Or, it was, until today when VMware released their vCloud API at VMworld under an extremely permissive license.  A FAQ is here. So what’s the big deal you say? …

Andreas Grabner continues his series with Live from VMWorld 2009 – Day 2 – VMWare’s Cloud Operating System of 9/2/2009:

Tod Nielsen and Paul Maritz hosted the Keynote today at VMWorld 2009, speaking to 12,488 attendees.

Maritz painted the history of virtualization and presented the idea of the Virtual Datacenter where VMWare allows you to manage your virtual datacenter – regardless whether deployed internally on your own infrastructure or externally on the infrastructure of a Cloud Service Provider.

The foundation of this is the vSphere 4 Cloud OS, allowing you to easily manage and move your deployed applications between your managed virtual data centers. Over 1000 Service Providers worldwide have adopted VMWare technology in their data centers to provide it to consumers. VMWare’s Chargeback product will help with charging consumed computational power and resources to the end-user or to business units hosting their applications on the internal cloud. …

Andreas Grabner reports Live from VMWorld 2009 – Day 1 for Dynatrace on 9/2/2009:

The Moscone Center in San Francisco opened the gates for VMWorld 2009. Over the next couple of days the attendees, partners and the press will hear the news about upcoming trends and challenges in virtualization and how VMWare (and their user base) is going to face them. The first announcement today was about VMWare Go - a new service that promises to make the first user experience with virtualization easier than it is now. Additional announcements about new products and new service partners (especially in cloud computing) will follow over the next few days. …

Microsoft’s Professional Developer Conference announces additional Azure-related sessions for PDC09 as of 9/5/2009:

SQL Azure sessions:

Windows Azure Platform sessions:

It’s obvious that PDC09 could use a few more SQL Azure sessions.

When: 11/17-19/2009   
Where: Los Angeles Convention Center, Los Angeles, CA, USA

Vittorio Bertocci announced in his More PDC09 Identity Awesomeness post that Kim Cameron will present:

Software + Services Identity Roadmap Update -- Kim Cameron

At PDC 2008, Microsoft unveiled a comprehensive offering of identity software and services, based on the industry standard claims-based architecture, and designed to address the rapidly growing requirements of modern applications both on-premises and cloud. In this session, we will demonstrate the progress we’ve made using real life use cases, discuss lessons learned in adoption of claims based identity, and preview new scenarios and capabilities of the evolving identity software + services portfolio.

and Vibro will be presenting at PDC09, too, promising “More details in the next weeks…”

Reuven Cohen’s Announcing The Global Governmental Cloud Computing Roundtable post of 9/4/2009 reports:

I am happy to announce my involvement as both instigator and moderator in an upcoming roundtable discussion on Global Governmental Cloud Computing coordinated by Foreign Affairs and International Trade Canada (DFAIT) and GTEC 2009 on October 6th in Ottawa, Canada.

The purpose of this by-invitation meeting is to provide an international forum for leading government CIOs and CTOs to discuss the opportunities and challenges of implementing cloud computing solutions in the public sector. We expect a total of 20 to 25 leading international government representatives to participate in the discussions. …

Participation is by invitation only, although if you are involved in a senior governmental IT role / organization and are interested in being included, you are encouraged to get in touch by September 18, 2009.

When: 10/6/2009   
Where: Ottawa, Ontario, Canada (invitation only) 

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

•• Ryan Howard’s Practice Fusion Announces Investment from Salesforce.com and Cloud Computing Initiative press release of 9/7/2009 claims:

Practice Fusion offers a revolutionary application and delivery model – cloud computing – enabling physician practices to deliver superior care to their patients. Practice Fusion provides free, web-based electronic medical records (EMR), practice management, patient scheduling and more.

Practice Fusion is launching its patient health record on Force.com, salesforce.com’s enterprise cloud computing platform. Force.com provides everything companies need to quickly build and deliver business applications in the cloud, including the database, unlimited real-time customization, powerful analytics, real-time workflow and approvals, programmable cloud logic, integration, real-time mobile deployment, programmable user interface and Web site capabilities. Applications built on Force.com benefit from the proven security, reliability and scalability of salesforce.com’s real-time global service infrastructure.

Ryan Howard is CEO of Practice Fusion.

Free Personal Health Record (PHR) management applications, such as HealthVault and PassportMD (free to Medicare recipients), are common, but free EMR and practice management (PM) software for physicians is not.

•• Alan Williamson reports in Amazon SimpleDB + SQS : Simple Java POJO Access of 9/6/2009 that he has updated his two Java classes that let you access Amazon Web Services’ SimpleDB and Simple Queue Service (SQS):

SimpleDB features

  • No external dependencies
  • Single POJO
  • Full API support; CreateDomain, DeleteDomain, DomainMetaData, select, GetAttributes, PutAttributes, BatchPutAttributes, DeleteAttributes
  • NextToken support
  • Signature2 authentication
  • Error Reporting
  • Last Request ID and BoxUsage reporting

SimpleSQS features

  • No external dependencies
  • Single POJO
  • Full API support; CreateQueue, DeleteQueue, ListQueues, DeleteMessage, SendMessage, ReceiveMessage, GetAttributes, ChangeMessageVisibility, AddPermission, RemovePermission
  • Signature2 authentication
  • Error Reporting
  • Public Domain license
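Both classes advertise Signature Version 2 authentication, which is the fiddliest part of talking to SimpleDB or SQS without a full SDK: sort the query parameters, percent-encode them RFC 3986-style, HMAC-SHA256 the canonical string and Base64 the result. Here's a self-contained sketch of that algorithm (the access key, secret and timestamp are placeholders, and this illustrates the wire protocol, not Alan's actual API):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.net.URLEncoder;
    import java.util.Base64;
    import java.util.Map;
    import java.util.TreeMap;

    // Sketch of AWS Signature Version 2 request signing for SimpleDB.
    // The access key, secret key and timestamp are placeholders.
    public class SignatureV2Sketch {

        // Signature V2 requires RFC 3986 percent-encoding; URLEncoder
        // is close but needs three corrections.
        static String encode(String s) throws Exception {
            return URLEncoder.encode(s, "UTF-8")
                .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
        }

        public static void main(String[] args) throws Exception {
            String host = "sdb.amazonaws.com";
            String secretKey = "SECRET-KEY-PLACEHOLDER";

            // Parameters must be sorted by key; TreeMap does that.
            Map<String, String> p = new TreeMap<String, String>();
            p.put("Action", "ListDomains");
            p.put("AWSAccessKeyId", "ACCESS-KEY-PLACEHOLDER");
            p.put("SignatureMethod", "HmacSHA256");
            p.put("SignatureVersion", "2");
            p.put("Timestamp", "2009-09-06T00:00:00Z");
            p.put("Version", "2009-04-15");

            StringBuilder qs = new StringBuilder();
            for (Map.Entry<String, String> e : p.entrySet()) {
                if (qs.length() > 0) qs.append('&');
                qs.append(encode(e.getKey())).append('=')
                  .append(encode(e.getValue()));
            }

            // Canonical string: verb, host, path, sorted query string.
            String toSign = "GET\n" + host + "\n/\n" + qs;

            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"),
                                       "HmacSHA256"));
            String signature = Base64.getEncoder()
                .encodeToString(mac.doFinal(toSign.getBytes("UTF-8")));

            System.out.println("https://" + host + "/?" + qs
                + "&Signature=" + encode(signature));
        }
    }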

Chris Fleck analyzes Cloud Computing Economics - Amazon EC2 vs Terremark vCloud Express in this 9/5/2009 post:

The recent announcement of the Terremark Cloud offering has raised significant attention especially because of the competitive pricing and EC2 like features of elastic capacity and hourly charges with no commitment. On the surface the Terremark entry price of $0.036 per hour seems very low compared to Amazon EC2 at $0.10 but it's worth picking a few examples to provide a more apples to apples comparison. …
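Back-of-the-envelope, assuming a 720-hour month and ignoring differences in RAM, CPU and bandwidth between the two offerings: Terremark’s $0.036/hour comes to about $25.92/month versus $72.00/month for EC2 at $0.10/hour. Chris’s post explains why those raw rates aren’t directly comparable.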

Barton George quotes GoGrid CEO John Keagy in his CEO of GoGrid: IT economy to shrink (big time) over next 10 years 9/3/2009 interview for Dell Computers:

The CEO and founder of GoGrid, John Keagy, made an interesting assertion at Cloud World/Open Source World: over the next decade, the IT economy will shrink from $1.5 trillion to $500 billion.  I thought this was an interesting statement so I followed up with him after his talk and we sat down for a quick interview:

Some of the things John talks about:

  • GoGrid plays in the Infrastructure on demand space and has been doing so since 2002.
  • They work with partners in the layers above infrastructure and don’t have plans to venture north.
  • The IT economy shrinkage will be driven by automation and reduced capex (commodity hardware is a big component of this)
  • Right now it’s hardly a competitive market in the IaaS space (“it’s GoGrid and a bookstore”) so you can expect to see prices drop as the competition heats up.
  • If you’re not doing your test, development and QA in the cloud, you’re not engaging in best practices.

The VAR Guy reports Red Hat Warns of Microsoft Windows Azure Lock-In in this 9/2/2009 post:

During Red Hat Summit in Chicago, CEO Jim Whitehurst and Executive VP Paul Cormier warned attendees not to get locked into virtualization and cloud initiatives involving Microsoft Windows Azure and VMware. Here’s a recap of the morning keynotes.

Whitehurst opened up the morning by warning attendees about antiquated software modes of the 20th century that can’t keep pace with today’s fast-moving development and user needs. While Oracle CEO Larry Ellison and others are pitching visions, said Whitehurst, Red Hat will continue to listen to customers and deliver new value back to them. Instead of forcing a complete architecture on customers, Red Hat will continue to promote an “architecture of participation,” said Whitehurst.

Red Hat’s idea of an “architecture of participation” is participation in paying Red Hat support subscription fees for “open software.”

<Return to section navigation list> 
