Monday, August 19, 2013

Windows Azure and Cloud Computing Posts for 8/12/2013+

Top Stories This Week:

A compendium of Windows Azure, Service Bus, BizTalk Services, Access Control, Caching, SQL Azure Database, and other cloud-computing articles.

‡ Updated 8/19/2013 with new articles marked ‡.
• Updated 8/14/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:


Windows Azure Blob, Drive, Table, Queue, HDInsight and Media Services

<Return to section navigation list>

BusinessWire reported Hortonworks Updates Hadoop on Windows in an 8/13/2013 press release:

Hortonworks, a leading contributor to and provider of enterprise Apache™ Hadoop®, today announced the general availability of Hortonworks Data Platform 1.3 (HDP) for Windows, a 100-percent open source data platform powered by Apache Hadoop. HDP 1.3 for Windows is the only Apache Hadoop-based distribution certified to run on Windows Server 2008 R2 and Windows Server 2012, enabling Microsoft customers to build and deploy Hadoop-based analytic applications. This release is further demonstration of the deep engineering collaboration between Microsoft and Hortonworks. HDP 1.3 for Windows is generally available now for download from Hortonworks.

Application Portability

Delivering on the commitment to provide application portability across Windows, Linux and Windows Azure environments, HDP 1.3 enables the same data, scripts and jobs to run seamlessly across both Windows and Linux. Now organizations can have complete processing choice for their big data applications and port Hadoop applications from one operating system platform to another as needs and requirements change.

New Business Applications Now Possible on Windows

New functionality in HDP 1.3 for Windows includes HBase 0.94.6.1, Flume 1.3.1, ZooKeeper 3.4.5 and Mahout 0.7.0. These new capabilities enable customers to exploit net new types of data to build new business applications as part of their modern data architecture.

Hortonworks Data Platform 1.3 for Windows is the only distribution that enables organizations to run Hadoop-based applications natively on Windows and Linux, providing a common user experience and interoperability across operating systems. HDP for Windows offers the millions of customers running their businesses on Microsoft technologies an ecosystem-friendly Hadoop-based solution that integrates with familiar business analytics tools, such as Microsoft Excel and the Microsoft Power BI for Office 365 suite, and is built for the enterprise and Windows.

“Microsoft is committed to bringing big data to a billion users. To achieve this, we are working closely with Hortonworks to make Hadoop accessible to the broadest possible group of mass market and enterprise customers,” said Herain Oberoi, director, SQL Server Product Management at Microsoft. “Hortonworks Data Platform 1.3 helps us bring Hadoop to Windows so that Microsoft customers can get the best of Hadoop from Hortonworks on premises and from Microsoft in the cloud via HDInsight. In addition, customers can take advantage of integration with Microsoft’s leading business intelligence tools such as Power BI for Office 365, SQL Server and Excel.”

“Hortonworks continues to enable Windows users with powerful enterprise-grade Apache Hadoop,” said Bob Page, vice president, products, Hortonworks. “This new release enables organizations to build new types of applications that were previously not possible and to exploit the massive volumes and variety of data flowing into their data centers.”

Availability

Hortonworks Data Platform 1.3 for Windows is now available for download at: http://hortonworks.com/download/

About Hortonworks

Hortonworks is the only 100-percent open source software provider to develop, distribute and support an Apache Hadoop platform explicitly architected, built and tested for enterprise-grade deployments. Developed by the original architects, builders and operators of Hadoop, Hortonworks stewards the core and delivers the critical services required by the enterprise to reliably and effectively run Hadoop at scale. Our distribution, Hortonworks Data Platform, provides an open and stable foundation for enterprises and a growing ecosystem to build and deploy big data solutions. Hortonworks also provides unmatched technical support, training and certification programs. For more information, visit www.hortonworks.com. Go from Zero to Hadoop in 15 Minutes with the Hortonworks Sandbox.


<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

Haddy El Haggan (@Hhagan) described differences between Windows Azure Notification Hub – General Availability and Windows Azure Mobile Services in an 8/12/2013 post:

The Windows Azure Service Bus Notification Hub is finally released and generally available for use in development. It supports push notifications for multiple platforms, including the Google, Microsoft and Apple push notification services. The Notification Hub makes it easy for an application to reach millions of users on their mobile or Windows applications by simply sending them a notification through the Service Bus.

Here are the differences between the Windows Azure Mobile Services and the Notification Hub push notification. This table is taken from Announcing General Availability of Windows Azure Notification Hubs & Support for SQL Server AlwaysOn Availability Group Listeners:

[Table: feature comparison between Windows Azure Mobile Services push notifications and Notification Hubs]


Miranda Luna (@mlunes90) announced the availability of New Mobile Services Samples in an 8/13/2013 post to the Windows Azure Team blog:

Our goal for Windows Azure is to power the world’s apps—apps across every platform and device from developers using their preferred languages, tools and frameworks. We took another step toward delivering on the promise with the recent general availability announcement of Mobile Services.

Here's a quick look at the new samples:

  • Web and mobile app for a marketing contest
  • Integration scenarios utilizing Service Bus Relay and BizTalk
  • Samples from SendGrid, Twilio, Xamarin and Redbit
  • Mobile Services sessions from //build

We hope these will serve as inspiration for your own mobile application development.

Web and mobile app for a marketing contest

The best user experience is one that’s consistent across every web and mobile platform.  Windows Azure Mobile Services and Web Sites allow you to do just that for both core business applications and for brand applications.   By sharing an authentication system and database or storage container between your web and mobile apps, as seen in the following demo, you can drive engagement and empower your users regardless of their access point.

In the following videos, Nik Garkusha demonstrates how Mobile Services and Web Sites can be used to create a consistent set of services used as a backend for an iOS app and a .NET web admin portal.


In Part 1, Nik covers using multiple authentication providers, reading/writing data with tables and interacting with Windows Azure blob storage.


In Part 2, Nik continues by creating the admin portal using Web Sites, using a Custom API for cross-platform push notifications, and using the Scheduler with 3rd-party add-ons for scripting admin tasks.

Integration scenarios utilizing Service Bus Relay and BizTalk

Modern businesses are often faced with the challenge of innovating and reaching new platforms while also leveraging existing systems.  Using Mobile Services with Service Bus Relay and BizTalk Server makes that possible.

In the following samples, Paolo Salvatori provides a detailed walk through of how to connect these services to enable such scenarios.

  • Integrating with a REST Service Bus Relay Service – This sample demonstrates how to integrate Mobile Services with a line of business application running on-premises via Service Bus Relay service and REST protocol.
  • Integrating with a SOAP Service Bus Relay Service – Here, a custom API is used to invoke a WCF service that uses a BasicHttpRelayBinding endpoint to expose its functionality via a SOAP Service Bus Relay service (a minimal sketch of hosting such a relay endpoint appears after this list).
  • Integrating with BizTalk Server via Service Bus – In this walk through, learn how to integrate Mobile Services with line of business applications, running on-premises or in the cloud, via BizTalk Server 2013, Service Bus Brokered Messaging, and Service Bus Relay. The Access Control Service is used to authenticate Windows Azure Mobile Services against the Windows Azure Service Bus. In this scenario, BizTalk Server 2013 can run on-premises or in a Virtual Machine on Windows Azure.
  • Integrating with Windows Azure BizTalk Services – See how to integrate Mobile Services with line of business applications, running on-premises or in the cloud, via Windows Azure BizTalk Services (currently in preview) and Service Bus Relay. The Access Control Service is used to authenticate Mobile Services against the XML Request-Reply Bridge used by the solution to transform and route messages to the line of business applications.
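
If you want a feel for the on-premises side of the SOAP relay scenario, the sketch below shows how a WCF service might be hosted on a BasicHttpRelayBinding endpoint. The service namespace, contract and credentials are placeholders; see Paolo's samples for complete, production-ready versions.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

// Hypothetical contract for the on-premises line-of-business service.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetOrderStatus(string orderId);
}

public class OrderService : IOrderService
{
    public string GetOrderStatus(string orderId) { return "Shipped"; }
}

class Program
{
    static void Main()
    {
        // Address of the relay endpoint in your Service Bus namespace.
        Uri address = ServiceBusEnvironment.CreateServiceUri("https", "mynamespace", "orders");

        using (var host = new ServiceHost(typeof(OrderService), address))
        {
            // BasicHttpRelayBinding exposes the service through the
            // Service Bus Relay over SOAP.
            var endpoint = host.AddServiceEndpoint(typeof(IOrderService), new BasicHttpRelayBinding(), address);

            // Authenticate the listener against the relay (ACS credentials).
            endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer-secret>")
            });

            host.Open();
            Console.WriteLine("Listening on " + address);
            Console.ReadLine();
        }
    }
}

A Mobile Services custom API can then call this endpoint as an ordinary SOAP client.
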
Samples from SendGrid, Twilio, Xamarin and Redbit

In March, we reiterated our commitment to making it easy for developers to build and deploy cloud-connected applications for every major mobile platform using their favorite languages, tools, and services.  Today, I’m happy to share updates to both the Mobile Services partner ecosystem and the feature suite that support that ongoing commitment.

Giving developers easy access to their favorite third party services and rich samples for using Mobile Services with those services is one of our team’s highest priorities. When we unveiled the source control and Custom API features, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.

Our friends at SendGrid, Twilio, Xamarin and Redbit have all created sample apps to inspire developers to reimagine what’s possible using Mobile Services.

  • SendGrid eliminates the complexity of sending email, saving time and money, while providing reliable delivery to the inbox.  SendGrid released an iOS sample app that accepts and plays emailed song requests.  The SendGrid documentation center and Windows Azure dev center have more information on how to send emails from a Mobile Services-powered app.
  • Twilio provides a telephony infrastructure web service in the cloud, allowing developers to integrate phone calls, text messages and IP voice communications into their mobile apps.  Twilio released an iPad sample that allows event organizers to easily capture contact information for volunteers, store it using Mobile Services and enable tap-to-call using Twilio Client.  Twilio also published a new tutorial in the Windows Azure dev center that demonstrates how to use Twilio SMS & voice from a Mobile Services custom API script.
  • Xamarin is a framework that allows developers to create iOS, Android, Mac and Windows apps in C#.  Xamarin’s Craig Dunn recently recorded a video showing developers how to get started building a cloud-connected todolist iOS app in C#.
  • The SocialCloud app, recently developed by Redbit, underscores the importance of our partner ecosystem. In addition to Mobile Services, Web Sites and the above third party services, SocialCloud also uses Service Bus, Linux VMs, and MongoDB. 

Visit the Redbit blog to learn more about how they built SocialCloud and why they decided to use these services together.

Mobile Services sessions from //BUILD/

The //BUILD/ conference was packed with sessions covering every aspect of developing connected applications with Mobile Services.  The best part is that, even if you weren’t in San Francisco, every session is available on Channel 9.  Be sure to check out:

Summary

We’re committed to continuously delivering improvements to both platform and infrastructure services that developers can rely on when building modern consumer and business applications. Expect to see more new and exciting updates from us shortly.  In the meantime I encourage you to:

  • Visit the developer center to get started building mobile apps
  • Find answers to your questions in the Windows Azure forums and on Stack Overflow
  • Continue to make feature requests on the Mobile Services uservoice
  • Bookmark http://aka.ms/CommonWAMS to keep the most up to date mobile services samples right at your fingertips

If you have any questions, comments, or ideas for how we can make Windows Azure better suit your development needs, you can always find me on Twitter.


David Pallman started a mobile development series with Getting Started with Mobility, Part 1: Understanding the Landscape on 8/13/2013:

In this series of posts, we're looking at how to get started as a mobile developer. Here in Part 1 we'll provide an overview of the landscape; in subsequent posts, we'll get down to the details of working in various platforms.

As a mobile developer, you may find yourself specializing in one particular platform (e.g. "I'm an Android developer") or perhaps supporting all of them ("I'm a mobile web developer") or specializing in mobile back-ends (services, data, security, and cloud). All of these are important.

The Front-End

You can develop mobile client apps natively, using a hybrid approach, or via mobile web. Let's look at them, one by one:

  • In native development, you are using a tool and language espoused by a mobile platform vendor as the official way to develop for their platform. Depending on the platform, there may be several endorsed languages or development environments to choose from. Native development is often considered the high road, because alternative approaches can sometimes compromise performance, limit fidelity to platform usability conventions, or restrict the functionality available to apps. However, native development can also be expensive and may be a mismatch for the skills known to your developers. 
  • In hybrid development, a third party provides tools and a framework that allow you to develop for a platform using an alternative approach. This may allow your developers to work in a familiar language or development tool. Some hybrid solutions generate a native application as their output; others execute within an execution layer or browser contained in a native application shell. Hybrid solutions are sometimes considered risky because the third party may not be able to stay in alignment with mobility platforms.
  • In mobile web development, you develop a web site intended for consumption on mobile devices via their browsers. You detect device size and other characteristics and the web experience adapts accordingly. A mobile web approach is not always a suitable experience, but at times it can be. From a skills standpoint, mobile web is very approachable due to the large number of web developers in existence. An economic advantage of mobile web is that you develop a single solution, rather than separate apps for each platform.

Many people have strong opinions about which approach is best, but be sure to consider the nuances of experience, device features needed, performance, skills alignment, risk, and development cost rather than making a snap decision.

The Back-End

Most mobile solutions involve more than just the app(s), that visible part you interact with on mobile devices. There often needs to be a back-end that provides one or more of the following:

  • Security - user authentication and authorization.
  • Storage - persistent data storage.
  • Notifications - event notifications.
  • Processing - you may need server-side processing to augment the limited processing available on mobile devices.
  • Integration - coordination with other systems in the enterprise or on the Internet.
  • Cloud - use of cloud computing can make your back-end available across a wide geography or worldwide.

Table: Mobile Client App Development Choices

The table below shows some of the choices available to you when you are targeting Android, Apple, Windows Phone, and/or Windows 8 devices.

In Part 2, we'll look at what it takes to develop native applications for iOS.


Nick Harris (@cloudnick) described How to implement Periodic Notifications in Windows Store apps with Azure Mobile Services Custom API in a 8/12/2013 article:

I blogged previously on how to make your Push Notification implementation more efficient.  In this post I will detail an alternative to push: Periodic Notifications, a poll-based solution for updating your live tile and badge content (note that it can’t be used for toast or raw notifications). It turns out that if your app scenario can tolerate receiving notifications only every thirty minutes or more, this is a much easier way to update your tiles.  All you need to do is configure your app for periodic notifications and point it at a service API that returns the appropriate XML template for the badge or tile update. Whether you are using WCF, Web API or something else, this is quite easy to achieve; in this case I will demonstrate how to implement it using Mobile Services.

Let’s start with the backend service by creating a Custom API.  To do this all you need to do is select the API tab in the Mobile Services portal.

MobileServices_CustomAPITab

Next, provide a name for your endpoint and set the GET permission to Everyone.

MobileServices_CustomAPIDialog

Next, define the XML return payload for your custom API.  You can see examples of each of the different tile templates here

exports.get = function(request, response) {
    // Use "request.service" to access features of your mobile service, e.g.:
    //   var tables = request.service.tables;
    //   var push = request.service.push;

    // A representative wide-tile payload using the TileWideImageAndText01
    // template; replace the image src and text with your own content.
    response.send(200, '<tile>'+
                            '<visual>'+
                                '<binding template="TileWideImageAndText01">'+
                                    '<image id="1" src="http://yourserver/yourimage.png"/>'+
                                    '<text id="1">@ntotten enjoying himself a little too much :)</text>'+
                                '</binding>'+
                            '</visual>'+
                        '</tile>');
};
    Note:
  • If you are supporting multiple tile sizes you should send the whole payload with content for each of the varying tile sizes
  • This is just an example – you really should be providing dynamic content here rather than the same tile template every time

Next configure your app to point at your Custom API through the package.appxmanifest

VisualStudio2013_PackageManifestPeriodicNotifications

Run your app, and ensure your app is pinned to Start in the right dimension for the content you are returning, e.g.:

Windows_PinLargeTile

Pin your tile and wait for the update after the app has been run once.

Windows_LargeTileUpdatedByPeriodicNotification

Job done! – the tile will be updated with the content from your site with the periodic update per your package manifest definition.
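
As an aside, if you prefer wiring up the poll in code rather than in the manifest, WinRT exposes the same capability through TileUpdateManager. A minimal sketch (the URL below is a placeholder for your Mobile Services custom API endpoint):

using System;
using Windows.UI.Notifications;

// Somewhere in your app's startup code:
var updater = TileUpdateManager.CreateTileUpdaterForApplication();

// Poll the custom API at the minimum supported interval (30 minutes).
updater.StartPeriodicUpdate(
    new Uri("https://yourservice.azure-mobile.net/api/tileupdate"),
    PeriodicUpdateRecurrence.HalfHour);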



<Return to section navigation list>

Windows Azure Marketplace DataMarket, Cloud Numerics, Big Data and OData

Bruno Terkaly (@brunoterkaly) asserted OData In The Cloud – One Of The Most Flexible And Powerful Ways To Provide Scalable Data Services To Virtually Any Client in an 8/15/2013 post:

Introduction

This post is dedicated to illustrating how you can create your own OData provider and host it in the cloud, specifically Windows Azure.

Open Data Protocol (a.k.a OData) is a data access protocol designed to provide standard CRUD access to a data source via a website. It is similar to JDBC and ODBC although OData is not limited to SQL databases.

OData can be thought of as an extension to REST and provides efficient and flexible ways for sharing data in a standardized format that is easily consumed by other systems. It uses well-known web technologies like HTTP, AtomPub and JSON. OData is a resource-based Web protocol for querying and updating data.
OData performs operations on resources using HTTP verbs (GET, PUT, POST and DELETE). It identifies those resources using a standard URI syntax. Data travels across the wire over HTTP using the AtomPub or JSON standards.

Generally speaking, OData leverages relational databases as the data store. But what I would like to illustrate is how to leverage a simple text file as the data store. I believe this will give you a quick and easy introduction to the way everything works.

Internally at Microsoft there are many products that leverage OData:

  • Windows Azure Data Market
  • Azure Table Storage uses OData, SharePoint 2010 allows OData Queries
  • Excel PowerPivot.

There are many advantages to OData

  • OData gives you an entire query language directly in the URL.
  • The client only gets the data that it requests - no more and no less
  • The client is very flexible, because it controls queries, not the server, which frees you from having to anticipate all the types of queries you need to support on the backend
  • It can request the data in various formats, such as XML, JSON, or AtomPub
  • Any client can consume the OData protocol
  • You don't need to learn the programming model of a service to program against the service
  • There are a lot of client libraries available, such as the Microsoft .NET Framework client, AJAX, Java, PHP, Objective-C and more.
  • OData supports server paging limits, HTTP caching support, stateless services, streaming support and a pluggable provider model
  • You can leverage LINQ as a query language
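
To make the "entire query language in the URL" point concrete, here is a rough client-side sketch. The service address and entity set anticipate the Crimes feed built later in this post; the names are otherwise hypothetical.

using System;
using System.Net.Http;

class ODataQueryExample
{
    static void Main()
    {
        // $filter and $top are standard OData query options expressed
        // directly in the URL; no server-side query code is required.
        const string uri =
            "http://myservice.cloudapp.net/CrimeDataService.svc/Crimes" +
            "?$filter=CrimeType eq 'ASSAULT'&$top=10";

        using (var client = new HttpClient())
        {
            // Ask for JSON instead of the default AtomPub feed.
            client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");
            Console.WriteLine(client.GetStringAsync(uri).Result);
        }
    }
}
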
Starting with government data

The city of San Francisco provides data available for download. So what I did is download crime statistics for the trailing three months. I reduced the 30,000 records to just a few hundred to make development a little bit easier.

One thing the example does not illustrate is how to make this extremely efficient by leveraging caching. This can easily be added to the project, but was avoided for the sake of simplicity.

We will use Visual Studio 2012 and will update some assemblies by using NuGet. That is an essential piece that is necessary for success.

Starting Visual Studio

Once you have Visual Studio up and running, choose File/New from the menu and select Cloud Project as seen below.


Add an ASP.Net Web Role to your solution, as seen below. There are other options, but this one is probably the most familiar to developers today. Click OK when finished.

Solution Explorer will now show two projects in the solution. The top one is for deployment purposes, while the bottom one is where we will add our OData code to get the job done.

Downloading data

You can navigate to the following URL to download some sample crime data.
https://data.sfgov.org/

I downloaded this data, removed some rows, and added it to the App_Data folder.
Note the file called PoliceData.txt in the App_Data folder.

Adding code

Now we are ready to start adding some code to process this data. We will begin by adding a couple of classes.

In Visual Studio, right-click the web role project and add a class. Name this class CrimeProvider.

There are some important points to notice about the code below:

  • There are two classes to note - CrimeData and CrimeProvider.
  • CrimeData represents the data that will travel across the wires from server to client. Notice that we are only sending four fields of data back from the actual data to the client (incident, crime type, crime date, and address).
  • The second class is CrimeProvider. This class will parse the data and build a list of CrimeData items, specifically List<CrimeData> crimes. Notice the method called LoadData, which first parses the data into rows, then splits each row into columns, and finally loads the CrimeData structure.
  • IQueryable is an important interface that makes your data queryable by a client. The Crimes property at the end of the listing, which returns crimes.AsQueryable(), is the mandatory piece that exposes the data for querying.

Also notice the using statements that we added at the top of the file:

CrimeProvider.svc.cs

using System;
using System.Collections.Generic;
using System.Data.Services.Common;
using System.IO;
using System.Linq;
using System.Net;
using System.Web;
using Microsoft.Data.OData;

namespace WebRole1
{
  [DataServiceKey("Incident")]
  public class CrimeData
  {
    public string Incident { get; set; }    // col 0
    public string CrimeType { get; set; }   // col 2
    public DateTime CrimeDate { get; set; } // col 4
    public string Address { get; set; }     // col 8
  }

  public class CrimeProvider
  {
    private List<CrimeData> crimes = new List<CrimeData>();

    public CrimeProvider()
    {
      WebRequest request = WebRequest.CreateDefault(new Uri(HttpContext.Current.Server.MapPath("~/App_Data/PoliceData.txt")));
      WebResponse response = request.GetResponse();
      using (StreamReader reader = new StreamReader(response.GetResponseStream()))
      {
        string data = reader.ReadToEnd();
        LoadData(data);
      }
    }

    public CrimeProvider(string data)
    {
      LoadData(data);
    }

    private void LoadData(string data)
    {
      string[] rows = data.Split('\n');
      for (int i = 1; i < rows.Length - 1; i++)
      {
        rows[i] = rows[i].Trim();
        string[] cols = rows[i].Split('\t');
        crimes.Add(new CrimeData
        {
          Incident = cols[0],
          CrimeType = cols[2],
          CrimeDate = Convert.ToDateTime(cols[4]),
          Address = cols[8]
        });
      }
    }

    public IQueryable<CrimeData> Crimes
    {
      get { return crimes.AsQueryable(); }
    }
  }
}

3 ways to write a provider

There are three methods that can be used to create an OData back end:

  1. EF Provider - easy to use
  2. Reflection Provider - what I used
  3. Custom Provider

The technique used today will be a reflection provider. The EF provider is another popular way that makes it easy to leverage a relational database using Entity Framework. The custom provider is more technically challenging, but offers the greatest flexibility. …
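
For context, here is roughly what the reflection-provider glue looks like once the CrimeProvider class shown earlier is in place. The service class name and access rules are illustrative; the actual .svc code-behind in the full project may differ.

using System.Data.Services;
using System.Data.Services.Common;

namespace WebRole1
{
  // The reflection provider inspects CrimeProvider and exposes every
  // IQueryable<T> property (here, Crimes) as an OData entity set.
  public class CrimeDataService : DataService<CrimeProvider>
  {
    public static void InitializeService(DataServiceConfiguration config)
    {
      // Read-only access to the Crimes feed.
      config.SetEntitySetAccessRule("Crimes", EntitySetRights.AllRead);
      config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
  }
}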

Bruno continues with detailed code examples.


<Return to section navigation list>

Windows Azure Service Bus, BizTalk Services and Workflow

SearchWinDevelopment published my (@rogerjenn) Pay-As-You-Go Windows Azure BizTalk Services Changes EAI and EDI on 7/15/2013 (missed when posted):

Microsoft released its latest incarnation of an earlier Windows Azure EAI and EDI Labs incubator project for cloud-based enterprise application integration and electronic data interchange at TechEd North America 2013. The Windows Azure BizTalk Services (WABS) preview lets developers combine Message Bus 2.1 and Workflow 1.0 with BizTalk Server components to manage message itineraries from the Azure cloud to on-premises line-of-business apps, at costs ranging from 6.5 cents to $4 per hour. (Hourly prices will double when WABS enters general availability.)

Enterprise application integration (EAI) projects consume more than 30% of current IT spending, according to Bitpipe.com. Three of the primary components of EAI solutions are the following:

  1. Message-oriented-middleware (MOM) to provide connectivity between applications by manipulating and passing messages in queues;
  2. Adapters to standardize connections between common packaged applications and message formats, such as Electronic Document Interchange (EDI, ANSI X12 and UN/EDIFACT dialects) between trading partners over HTTP (AS2) or FTP (AS3) transports, as well as Web services (AS4);
  3. Workflow management tools to orchestrate transactions initiated by business-to-business (B2B) message flows

BizTalk Server (BTS), which Microsoft introduced in 2000, is a packaged enterprise service bus (ESB) originally intended as MOM for constructing on-premises EAI and B2B solutions. BTS 2013 enables graphical process modeling and customization with Visual Studio 2012 and provides a Windows Communication Foundation (WCF) Adapter set for multiple transports. The included BizTalk Adapter Pack provides connectivity with 15 Line-of-Business (LOB) systems, such as SAP, Oracle Database and eBusiness Suite, IBM WebSphere and DB2, Siebel, PeopleSoft, JD Edwards and TIBCO.

Running the WABS numbers

Implementing a BTS solution requires a major up-front investment in license fees, which can be as much as $50,000 ($43,000 for the Enterprise edition on up to a four-core processor, plus $3,400 for SQL Server standard edition on two cores and $1,300 for two two-core Windows Server 2012 instances.) Add $25,000 for servers, networking hardware and data center space, as well as $25,000 in commissioning costs. A $100,000 bill before paying BizTalk consultants and developers means BTS on premises isn't practical for most small and many medium-size businesses. Substituting the Standard edition, which doesn't support scaling, reduces licensing costs to about $10,000, but there remains a hefty up-front investment for a small firm.

Windows Azure IaaS offers Virtual Machines (WAVMs) with images preconfigured with BTS Standard or Enterprise editions, which range in cost from $0.84 (Standard edition on a Medium-size instance) to $6.52 (Enterprise edition on an Extra Large instance) per hour. These prices translate to about $625 to $4,851 per month, based on 744 hours. This pay-as-you-go approach eliminates the up-front cost and lets you quickly scale up your BTS installation as business increases. However, you're required to configure, manage and protect your cloud servers. An advantage of running BTS in a WAVM is the capability to move BizTalk applications between the cloud and an on-premises data center.

Pricing

Table 1. Prices and limitations of the four Windows Azure BizTalk Services versions.

Small firms and independent developers and consultants get a break with Windows Azure BizTalk Services (WABS), which ranges in cost during its preview stage from $0.065/hour (~$48/month) for the Developer edition to $4.03/hour ($2,998/month) for the Premium edition, plus standard Windows Azure data transfer charges. These hourly/monthly prices (see Table 1) reflect a 50% preview discount. Each version has twice the compute resources of its predecessor: Basic has twice the resources of Developer, Standard has twice the resources of Basic and Premium has twice the resources of Standard.

Prices reflect a 50% discount during the preview period. Standard Windows Azure data transfer charges apply. A scalability unit corresponds to a BizTalk Server unit, which represents a single CPU core. (Data is from the Introduction to Windows Azure BizTalk Services session at TechEd North America 2013.) …

Read more (might require free registration.)


‡ Nick Harris (@cloudnick) reported Updated NotificationsExtensions WnsRecipe Nuget to support Windows 8.1 templates now available on 8/16/2013:

A short post to let you know that I have just published the updated NotificationsExtensions WnsRecipe Nuget with support for the new notification templates that were added in Windows 8.1.

Here is a short demonstration of how to use it to send a new TileSquare310x310ImageAndText01 template with the WnsRecipe Nuget Package

Install the package using the Nuget Package Manager Console. (Note you could also do this using Manage package references in Solution Explorer.)

install-package WnsRecipe

Add using statements to the NotificationsExtensions namespace

using NotificationsExtensions;
using NotificationsExtensions.TileContent;

New up a new WnsAccessTokenProvider and provide it your credentials configured in the Windows Store app Dashboard

private WnsAccessTokenProvider _tokenProvider = new WnsAccessTokenProvider("ms-app://", "");

Use Tile Content Factory to create your tile template

var tile = TileContentFactory.CreateTileSquare310x310ImageAndText01();

tile.Image.Src = "https://nickha.blob.core.windows.net/tiles/empty310x310.png";
tile.Image.Alt = "Images";
tile.TextCaptionWrap.Text = "New Windows 8.1 Tile Template 310x310";

// Note you really should not do the line below :) , 
// instead you should be setting the required content 
// through property tile.Wide310x150Content so that users
// get updates irrespective of what size tile they have pinned to Start
tile.RequireWide310x150Content = false;  

//Send the notification to the desired channel
var result = tile.Send(new Uri(channel), _tokenProvider);

and here is the output:

310x310tile



<Return to section navigation list>

Windows Azure Access Control, Active Directory, Identity and Workflow

No significant articles so far this week.



<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

No significant articles so far this week.



<Return to section navigation list>

Windows Azure Cloud Services, Caching, APIs, Tools and Test Harnesses

‡ SearchCloudComputing published my (@rogerjenn) Unlock Windows Azure Development in Visual Studio 2013 Preview with the .NET SDK 2.1 on 8/15/2013:

Microsoft's accelerated update schedule for Visual Studio appeared at first to have caught the Windows Azure team off-guard. Corporate vice president S. "Soma" Somasegar outlined the new features of the Visual Studio 2013 Preview and announced its availability for download at this summer's Build developer conference in June. Of particular interest to Windows Azure developers were the new features:

  • The capability to create and edit new Mobile Services (WAMS) in the Visual Studio IDE
  • Right-click publishing with preview, per-publish profile web.config transforms and selective publishing with diffing for Web Sites (WAWS)
  • A tree view of Windows Azure subscriptions and dependent resources in the Server Explorer (see Figure 1)
  • Windows Azure Active Directory (WAAD) support for Web applications

However, the Visual Studio (VS) 2013 Preview didn't support the then-current .NET SDK 2.0 for Windows Azure. Therefore, most Azure-oriented developers elected to wait for an updated SDK to avoid the inefficiency of different developer environments for cloud and on-premises .NET app development.

Visual Studio 2013 Preview

Figure 1. VS 2013 Preview's enhanced Server Explorer running under Windows 8.1 Preview displays Azure subscriptions and attendant resources in a hierarchical tree view; creating a new Windows Azure Cloud Service offers a choice of six Web or worker roles.

VS 2013 Preview wasn't off-limits to Windows Azure developers for very long. Scott Guthrie announced the release of the Windows Azure SDK 2.1 for .NET on July 31. According to Guthrie, this SDK offers the following new features:

  • Visual Studio 2013 Preview support: Windows Azure SDK now supports the new VS 2013 Preview
  • Service Bus: New high availability options, notification hub support, improved VS tooling
  • Visual Studio 2013 VM image: Windows Azure now has a built-in VM image for hosting and developing with VS 2013 in the cloud
  • Visual Studio Server Explorer Enhancements: Redesigned with improved filtering and auto-loading of subscription resources
  • Virtual machines: Start and Stop VMs with suspended billing directly from within Visual Studio
  • Cloud services: Emulator Express option with reduced footprint and Run as Normal User support
  • PowerShell Automation: Lots of new PowerShell commands for automating Web sites, cloud services, VMs and more

Hot on the SDK's heels came an updated Windows Azure Training Kit (WATK) with new and refreshed content for the SDK for .NET 2.1. …

Read more (might require no-charge registration.)


• Alexandre Brisebois (@Brisebois) described how to Create a Dev & Test Environment in Minutes! in an 8/13/2013 post:

How many times do we have to scramble to assemble a decent Dev & Test environment?

When I think back to my past lives, I can attest that it’s been a challenging mess. I used to run around to various departments in order to find available machines, software installation disks, licenses and IT resources to help me put everything together.

Taking shortcuts usually meant cutting back on the Dev & Test infrastructure. Consequently, I rarely had environments that mirrored the actual production environment. Products would make their way through development and quality assurance, but I rarely had a clear picture of how they would react to the production environment. Deploying to production usually resulted in being asked to come in on weekends, because the outcome was completely unpredictable and time needed to be scheduled in order to roll back.

Just thinking about all this sends chills down my spine!

Since then things have changed quite a bit. I found shortcuts allowing me to build cost-effective environments without having to run around begging for resources. Microsoft has recently introduced Dev & Test, which allows me to set up my environments in a matter of minutes!

If you’re already an MSDN subscriber then you’re all set!  Visual Studio Professional, Premium or Ultimate MSDN subscriptions will permit you to activate Dev & Test by creating a Windows Azure subscription from your MSDN subscription benefits page.

Using the MSDN Windows Azure subscription, I can spin up virtual machines that allow me to test various scenarios. For example, I can choose from a variety of pre-configured Virtual Machines like Windows Server, SQL Server, SharePoint and BizTalk, with discounts ranging from 25% on Cloud Services to upwards of 33% on BizTalk Enterprise Virtual Machines.

More Details

There are a couple of interesting benefits to building my Dev & Test environments on Windows Azure.

First of all, it’s great for short-lived projects. I can create environments without major capital investments and I can rapidly decommission Virtual Machines, services and reserved resources when the project comes to an end. Best of all I don’t get stuck with the extra hardware and software licenses.

Waiting on IT departments is a thing of the past; I can get up and running quickly!

Using Windows Azure Dev & Test, I can cycle through proofs of concept using various Virtual Machine configurations. Easily playing around with OS versions, the number of GBs of RAM, the number of CPU cores and the amount of available bandwidth allows me to spot potential pain points before going to production. Doing the same kind of tests on-premises can be quite complex due to the sheer amount of time required to deal with all the hardware and software involved.

On Windows Azure, creating a new Virtual Machine is a breeze!

Login to the Windows Azure Management Portal and click on the NEW + menu found at the bottom left of the screen.


Select QUICK CREATE from the VIRTUAL MACHINE option found under COMPUTE. Then complete the form by providing your new Virtual Machine with a name, a size and by selecting the base image from the dropdown list. Provide Windows Azure with a user name and password that you will use to log in. Finally, select the region where you want to create your Virtual Machines.


There are quite a few pre-configured Virtual Machine Images available. If you don’t find what you are looking for, you can create your own by building a Virtual Machine Image on-premises and uploading it to Windows Azure. You will then be able to provision Virtual Machines based on your custom Image. See the full list of Microsoft server software supported on Windows Azure Virtual Machines.


Virtual Machine and Cloud Service Sizes for Windows Azure are listed below. I usually work with Medium sized Virtual Machines because my software requires quite a bit of RAM.

Choosing the right Virtual Machine size can be challenging and being able to try them out is a huge advantage. At this point it’s also important to note that along with CPU, RAM and Disk Size each configuration comes with a specific amount of Bandwidth. Be sure that your application does not suffer because it lacks Bandwidth.


Clicking on CREATE A VIRTUAL MACHINE will start provisioning a Virtual Machine based on your specifications. This is the perfect time to get yourself a cup of coffee, by the time you get back you will be presented with your brand new Virtual Machine.


Clicking on the Virtual Machine will bring you to its dashboard.


This is where you are presented with diagnostics, configurations and general information about the Virtual Machine. Use this information to monitor and diagnose performance problems without using Remote Desktop.

The Windows Azure Management Portal will also provide you with the following commands.


Use CONNECT to safely Remote Desktop into your newly created Virtual Machine. RESTART or SHUTDOWN the Virtual Machine directly from the Windows Azure Management Portal. Deleting the Virtual Machine will release its resources back to Windows Azure.

Take Away

Working with Windows Azure over the last year, I have to say that much of the pain associated with creating and managing Dev & Test environments has gone away. I can finally concentrate on finding the right solution for my client’s needs without having to deal with too much politics and the red tape that comes with it.

Being able to spin up machines at a moment’s notice has allowed me to rapidly confirm and validate possible architectures. Above all else, it’s allowed me to do so at a very low cost because I don’t need Virtual Machines to run 24/7.

Keep in mind that prices used in this post have been taken from August 2013 and may have changed over time. Please refer to the official pricing on windowsazure.com.

Shutting down Virtual Machines when I don’t need them ends up saving me quite a bit of money! I currently start the Virtual Machine when I start working in the morning and I shut it down when I go home at night, so I’m paying for about 8 hours’ worth of compute time per day. Since it’s only running for 8 hours per day I’m currently paying $0.96/day instead of $2.88/day, which corresponds to a full day’s worth of compute.

Let’s put this back into perspective, because daily pricing doesn’t really give a good idea of the actual cost of my Dev & Test environment, so let’s look at it on a long-term basis. My projects usually run for 3 months, working on average 20 days per month. That means my Dev & Test environment costs a total of $57.60 ($0.96/day × 20 days × 3 months) for the duration of the whole project. Keep in mind that if my Virtual Machine had been running 24/7 it would have cost me $144.

Nevertheless, savings generated by the discounted pricing of the Windows Azure Dev & Test offering are quite significant and do make a world of difference in the long run.

I use Windows Azure Dev & Test environments because:

  • You can connect securely from anywhere (working from home)
  • You can test load and scalability scenarios
  • You can use PowerShell to automate their creation
  • You can develop Windows & Linux based solutions
  • You can test newly release software (SQL Server, BizTalk, SharePoint…)
  • You don’t have to wait for hardware, procurement or internal processes
  • You pay for what you use (by the minute billing)
  • You benefit from discounted hourly rates
  • You get monthly Windows Azure Credits
  • You can Dev & Test in the cloud and deploy on-premise
  • You can use MSDN Software on Windows Azure

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Scott Guthrie (@ScottGu) described Windows Azure: General Availability of SQL Server Always On Support and Notification Hubs, AutoScale Improvements + More updates in an 8/12/2013 post:

This morning we released some major updates to Windows Azure.  These new capabilities include:

  • SQL Server AlwaysOn Support: General Availability support with Windows Azure Virtual Machines (enables both high availability and disaster recovery)
  • Notification Hubs: General Availability Release of Windows Azure Notification Hubs (broadcast push for Windows 8, Windows Phone, iOS and Android)
  • AutoScale: Schedule-based AutoScale rules and richer logging support
  • Virtual Machines: Load Balancer Configuration and Management
  • Management Services: New Portal Extension for Operation logs + Alerts

All of these improvements are now available to use immediately (note: AutoScale is still in preview – everything else is general availability).  Below are more details about them.

SQL Server AlwaysOn Support with Windows Azure Virtual Machines

I’m excited to announce the general availability release of SQL Server AlwaysOn Availability Groups support within Windows Azure.  We have updated our official documentation to support Availability Group Listeners for SQL Server 2012 (and higher) on Windows Server 2012.

SQL Server AlwaysOn Availability Group support, which was introduced with SQL Server 2012, is Microsoft’s premier solution for enabling high availability and disaster recovery with SQL Server.  SQL Server AlwaysOn Availability Groups support multi-database failover, multiple replicas (5 in SQL Server 2012, 9 in SQL Server 2014), readable secondary replicas (which can be used to offload reporting and BI applications), configurable failover policies, backups on secondary replicas, and easy monitoring. 

Today, we are excited to announce that we support the complete SQL Server AlwaysOn Availability Groups technology stack with Windows Azure Virtual Machines - including enabling support for SQL Server Availability Group Listeners.  We are really excited to be the first cloud provider to support the full range of scenarios enabled with SQL Server AlwaysOn Availability Groups – we think they are going to enable a ton of new scenarios for customers.

High Availability of SQL Servers running in Virtual Machines

You can now use SQL Server AlwaysOn within Windows Azure Virtual Machines to achieve high availability and global business continuity.  As part of this support you can now deploy one or more readable database secondaries – which not only improves availability of your SQL Servers but also improves efficiency by allowing you to offload BI reporting tasks and backups to the secondary machines.

Today’s Windows Azure release includes changes to better support SQL Server AlwaysOn functionality with our Windows Azure Network Load Balancers.  With today’s update you can now connect to your SQL Server deployment with a single client connection string using the Availability Group Listener endpoint.  This will automatically route database connections to the primary replica node – and our network load balancer will automatically update to route requests to a secondary replica node in the event of an automatic or manual failover scenario:

[Diagram: client connections routed through the Availability Group Listener to the primary replica, with automatic rerouting on failover]

This new SQL Server Availability Group Listener support enables you to easily deploy SQL Databases in Windows Azure Virtual Machines in a high-availability configuration, and take full advantage of the full SQL Server feature-set.  It can also be used to ensure no downtime during upgrade operations or when patching the virtual machines.
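
As a rough illustration (the server, database and credential names below are placeholders), a client connection through the listener might look like the following. MultiSubnetFailover=True lets the client try all listener IP addresses in parallel so a failover is detected quickly, and ApplicationIntent=ReadOnly can route eligible sessions to a readable secondary when read-only routing is configured.

using System;
using System.Data.SqlClient;

class ListenerConnectionExample
{
    static void Main()
    {
        // Connections target the Availability Group Listener, not a specific
        // node, so they always land on the current primary replica.
        using (var conn = new SqlConnection(
            "Server=tcp:MyAgListener,1433;Database=MyAppDb;" +
            "User ID=myUser;Password=<password>;MultiSubnetFailover=True"))
        {
            conn.Open();
            Console.WriteLine("Connected to the primary replica.");
        }

        // Read-intent connections can be routed to a readable secondary,
        // offloading reporting and BI workloads from the primary.
        using (var readConn = new SqlConnection(
            "Server=tcp:MyAgListener,1433;Database=MyAppDb;" +
            "User ID=myUser;Password=<password>;MultiSubnetFailover=True;" +
            "ApplicationIntent=ReadOnly"))
        {
            readConn.Open();
        }
    }
}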

Disaster Recovery of an on-premises SQL Server using Windows Azure

In addition to enabling high availability solutions within Windows Azure, the new SQL Server AlwaysOn support can also be used to enable on-premise SQL Server solutions to be expanded to have one or more secondary replicas running in the cloud using Windows Azure Virtual Machines.  This allows companies to enable high-availability disaster recovery scenarios – where in the event of a local datacenter being down (for example: due to a hurricane or natural disaster, or simply a network HW failure on-premises) they can failover and continue operations using Virtual Machines that have been deployed in the cloud using Windows Azure.

[Diagram: on-premises Availability Group with primary and secondary (S1) replicas, plus a secondary replica (S2) running in a Windows Azure Virtual Machine]

The diagram above shows a scenario where an on-premises SQL Server AlwaysOn Availability Group has been defined with two database replicas - a primary and a secondary replica (S1).  One more secondary replica (S2) has then been configured to run in the cloud within a Windows Azure Virtual Machine.  This secondary replica (S2) will continuously synchronize transactions from the on-premises primary replica.  In the event of a disaster on-premises, the company can fail over to the replica in the cloud and continue operations without business impact. 

In addition to enabling disaster recovery, the secondary replica(s) can also be used to offload reporting applications and backups. This is valuable for companies that require maintaining backups outside of the data center for compliance reasons, and enables customers to leverage the replicas for compute scenarios even in non-disaster scenarios. 

Learn more about SQL Server AlwaysOn support in Windows Azure

You can learn more about how to enable SQL Server AlwaysOn Support in Windows Azure by reading the High Availability and Disaster Recovery for SQL Server in Windows Azure Virtual Machines documentation.  Also review this TechEd 2013 presentation: SQL Server High Availability and Disaster Recovery on Windows Azure VMs.  We are really excited to be the first cloud provider to enable the full range of scenarios enabled with SQL Server AlwaysOn Availability Groups – we think they are going to enable a ton of new scenarios for customers.

Windows Azure Notification Hubs

I’m excited today to announce the general availability release of Windows Azure Notification Hubs.  Notification Hubs enable you to instantly send personalized, cross-platform, broadcast push notifications to millions of Windows 8, Windows Phone 8, iOS, and Android mobile devices. 

I first blogged about Notification Hubs starting with the initial preview of Notification Hubs in January.  Since the initial preview, we have added many new features (including adding support for Android and Windows Phone devices in addition to Windows 8 and iOS ones) and validated that the system is ready for any amount of scale that your next app requires.

You can use Notification Hubs from both Windows Azure Mobile Services or any other custom Mobile Backend you have already built (including non-Azure hosted ones) – which makes it really easy to start taking advantage of from any existing app.

Notification Hubs: Personalized cross platform broadcast push at scale

Push notifications are a vital component of mobile applications.  They’re the most powerful customer engagement mechanism available to mobile app developers.  Sending a single push notification message to one mobile user is relatively straight forward (and is already easy to-do with Windows Azure Mobile Services today).  But sending simultaneous push notifications in a low-latency way to millions of mobile users, and handling real world requirements such as localization, multiple platform devices, and user personalization is much harder.

Windows Azure Notification Hubs provide you with an extremely scalable push notification infrastructure that helps you efficiently route cross-platform, personalized push notification messages to millions of users:

  • Cross-platform. With a single API call using Notification Hubs, your app’s backend can send push notifications to your users running on Windows Store, Windows Phone 8, iOS, or Android devices.
  • Highly personalized. Notification Hubs' built-in templating functionality allows you to let the client choose the shape, format and locale of the notifications it wants to see, while keeping your backend code platform-independent and really clean.
  • Device token management. Notification Hubs relieves your backend from the need to store and manage channel URIs and device tokens used by Platform Notification Services (WNS, MPNS, Apple PNS, or Google Cloud Messaging Service). We securely handle the PNS feedback, device token expiry, etc. for you.
  • Efficient tag-based multicast and pub/sub routing. Clients can specify one or more tags when registering with a Notification Hub thereby expressing user interest in notifications for a set of topics (favorite sport/teams, geo location, stock symbol, logical user ID, etc.). These tags do not need to be pre-provisioned or disposed, and provide a very easy way for apps to send targeted notifications to millions of users/devices with a single API call, without you having to implement your own per-user notification routing infrastructure.
  • Extreme scale. Notification Hubs are optimized to enable push notification broadcast to thousands or millions of devices with low latency. Your server back-end can fire one message into a Notification Hub, and thousands/millions of push notifications can automatically be delivered to your users, without you having to re-architect or shard your application.
  • Usable from any backend. Notification Hubs can be easily integrated into any back-end server app using .NET or Node.js SDK, or easy-to-use REST APIs. It works seamlessly with apps built with Windows Azure Mobile Services. It can also be used by server apps hosted within IaaS Virtual Machines (either Windows or Linux), Cloud Services or Web-Sites.

Bing News: Using Windows Azure Notification Hubs to Deliver Breaking News to Millions of Devices

A number of big apps started using Windows Azure Notification Hubs even before today’s General Availability Release.  One of them is the Bing News app included on all Windows 8 and Windows Phone 8 devices.

The Bing News app needs the ability to notify their users of breaking news in an instant. This can be a daunting task for a few reasons:

  • Extreme scale: Every Windows 8 user has the News app installed, and the Bing backend needs to deliver hundreds of millions of breaking news notifications to them every month
  • Topic-based multicast: Broadcasting push notifications to different markets, based on interests of individual users, requires efficient pub sub routing and topic-based multicast logic
  • Cross-platform delivery: Notification formats and semantics vary between mobile platforms, and tracking channels/tokens across them all can be complicated

Windows Azure Notification Hubs turned out to be a perfect fit for Bing News, and with the most recent update of the Bing News app they now use Notification Hubs to deliver push notifications to millions of Windows and Windows Phone devices every day.


The Bing News app on the client obtains the appropriate ChannelURIs from the Windows Notification Service (WNS) and the Microsoft Push Notification Service (MPNS), for the Windows 8 and Windows Phone versions respectively, and then registers them with a Windows Azure Notification Hub. When a breaking news alert for a particular market has to be delivered, the Bing News app uses the Notification Hubs to instantly broadcast appropriate messages to all the individual devices.  With a single REST call to the Notification Hub they can automatically filter the customers interested in the topic area (e.g. sports update) and instantly deliver the message to millions of customers:

[Diagram: Bing News broadcasting breaking-news notifications to millions of devices through a Notification Hub]

Windows Azure handles all of the complex pub/sub filtering logic for them, and efficiently handles deliver of the messages in a low-latency way.

Create your first Notification Hubs Today

Notification Hubs support a free tier of usage that allows you to send 100,000 operations every month to 500 registered devices at no cost – which makes it really easy to get started. 

To create a new Notification Hub simply choose  New->App Services->Service Bus->Notification Hub within the Windows Azure Management Portal:

[Screenshot: creating a new Notification Hub in the Windows Azure Management Portal]

Creating a new Notification Hub takes less than a minute, and once created you can drill into it to see a dashboard view of activity with it.  Among other things, the dashboard allows you to see how many devices have been registered with it, how many messages have been pushed to it, how many messages have been successfully delivered via it, and how many have failed:

[Screenshot: Notification Hub dashboard showing registrations, messages pushed, and delivery results]

Once your hub is created, click the “Configure” tab to enter your app credentials for the various push notifications services (Windows Store/Phone, iOS, and Android) that your Notification Hub will coordinate with:

[Screenshot: the Notification Hub Configure tab for entering push service credentials]

And with that your notification hub is ready to go!

Registering Devices and Sending out Broadcast Notifications

Now that a Notification Hub is created, we’ll want to register device apps with it.  Doing this is really easy – we have device SDKs for Windows 8, Windows Phone 8, Android, and iOS. 

Below is the code you would write within a C# Windows 8 client app to register a user’s interest in broadcast notifications sent to the “myTag” or “myOtherTag” tags/topics:

await hub.RegisterNativeAsync(channel.Uri, new string[] { "myTag", "myOtherTag" });
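For context, here is a slightly fuller sketch of that client-side flow – obtaining the ChannelURI from WNS and constructing the hub client – assuming the managed Notification Hubs client library; the hub name and listen connection string are placeholders:

// Inside an async method of a Windows Store app.
var channel = await PushNotificationChannelManager
    .CreatePushNotificationChannelForApplicationAsync();   // ChannelURI from WNS

var hub = new NotificationHub("myhub",
    "<DefaultListenSharedAccessSignature connection string>");

// Register this device for the "myTag" and "myOtherTag" topics.
await hub.RegisterNativeAsync(channel.Uri, new string[] { "myTag", "myOtherTag" });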

Once a device is registered, it will automatically receive a push notification message when your app backend sends a message to topics/tags it is registered with.   You can use Notification Hubs from a Windows Azure Mobile Service, a custom .NET back-end app, or any other app back-end with our Node.js SDK or REST API.  The below code illustrates how to send a message to the Notification Hub from a custom .NET backend using the .NET SDK:

var toast = @"<toast><visual><binding template=""ToastText01""><text id=""1"">Hello everybody!</text></binding></visual></toast>";

await hub.SendWindowsNativeNotificationAsync(toast);

A single call like the one above from your app backend will now securely deliver the message to any number of devices registered with your Notification Hub.  The Notification Hub will handle all of the details of the delivery irrespective of how many users you are sending it to (even if there are 10s of millions of them). 
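For completeness, the hub variable used in the backend snippet above is a NotificationHubClient; one way to construct it (the hub name and connection string below are placeholders) looks like this:

// Requires the Windows Azure Service Bus SDK.
var hub = NotificationHubClient.CreateClientFromConnectionString(
    "<DefaultFullSharedAccessSignature connection string>", "myhub");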

Scaling and Monitoring your Notification Hub

Once you’ve built your app, you can easily scale it to millions of users directly from the Windows Azure management portal.  Just click the “scale” tab in your Notification Hub within the management portal to configure the number of devices and messages you want to support:

image

In addition to scaling capacity, you can also monitor and track nearly 50 different metrics about your notifications and their delivery to your customers:

image

Learn More about Notification Hubs

Learn more about Notification Hubs on the Notification Hubs service page, where you will find video tutorials, in-depth scenario guidance, and links to SDK references.

We are happy to continue offering Notification Hubs at no charge to all Windows Azure subscribers through September 30, 2013.  We will begin billing for Notification Hubs consumption in the Basic and Standard tiers on October 1, 2013.  A Free Tier will also continue to be available, supporting 100,000 notifications with 500 registered devices each month at no cost.

AutoScale: Scheduled AutoScale Rules and Richer Logging

This summer we introduced new AutoScale support to Windows Azure that enables you to automatically scale Web Sites, Cloud Services, Mobile Services and Virtual Machines.  AutoScale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention required) so that you can achieve the ideal performance and cost balance. Once configured, AutoScale will regularly adjust the number of instances running in response to the load in your application.

Today, we are introducing even more AutoScale features – including the ability to proactively adjust your Cloud Service instance count using time scheduled rules.

Schedule AutoScale Rules

If you click on the Scale tab of a Cloud Service, you’ll see that we’ve now added support for you to configure/control different scaling rules based on schedules.

By default, you’ll edit scale settings for No scheduled times – this means that your scale settings will always be the same regardless of the time/day. You can scale manually by selecting None in the Scale by Metric section – this will give you the traditional Instance Count slider that you are familiar with:

image

Or you can AutoScale dynamically by reacting to CPU activity or Queue Depth.  The screenshot below demonstrates configuring an auto-scale rule based on the CPU of the WebTier role that scales between 1 and 3 instances – depending on the aggregate CPU:

image

With today’s release, we also now allow you to setup different scale settings for different times of the day.  You can enable this by clicking the “Set up Schedule Times” button above.  This brings up a new dialog:

image

With today’s release we now offer the ability to define two different recurring schedules: Day and Night. The first schedule, Day Time, runs from the start of the day to the end of the day (which I’ve defined above as being between 8am and 8pm). The second schedule, Night Time, runs from the end of one day to the start of the next day. Both use the options in Time to define the start and end of a day, and the time zone. This schedule respects daylight saving time, if it is applicable to that time zone. In the future we will add other types of time-based schedules as well.

Once you’ve setup a day/night schedule, you can return to the Scale page and see that the schedule dropdown now has the two schedules you created populated within it:

image

You can now select each schedule from the list and edit scaling rules specific to it within it. For example, you can select the Day Time Schedule and set Instance Count on a Cloud Service role to 5, and then select Night Time and set Instance Count to 3.  This will ensure that Windows Azure scales up your service to 5 instances during the day, and then cycles them down to 3 instances overnight.

You can also combine Scheduled Autoscale rules and the Metric Based AutoScale rules together.  Select the CPU or Queue toggle and you can configure AutoScale rules that apply differently during the day or night. For example, you could set the Instance Range from 5 to 10 during the day, and 3 to 6 at night based on CPU activity.

Today’s release only supports Scheduled AutoScale rules on Cloud Services – but you’ll see us enable these with all types of compute resources (including Web Sites, Mobile Services + VMs) shortly.

AutoScale History

It’s now easy to know and log exactly what AutoScale has done for your service: there are four new AutoScale history features with today’s release to help with this.

First, we have added two new operations to Windows Azure’s Operation Log capability: AutoscaleAction and PutAutoscaleSetting. We now record each time that AutoScale takes a scale up or scale down action, and include the new and previous instance counts in the details. In addition, we record each time anyone changes autoscale settings – you can use this to see who on your team changed autoscale options and when.  These are both now exposed in the Operation Logs tab of the new Management Services node within the Windows Azure Management Portal:

image

Second, for Cloud Services we are adding a historical graph that shows the number of instances over the past 7 days. This way, you can see trends in AutoScale over the span of a week:

image

Third, if AutoScale ever fails for more than 2 hours at a time, we will automatically notify the Service Administrator and Co-Admin of the subscription via email:

image

Fourth, if you are the Account Administrator for your subscription, we will now show you billing information about Autoscale in your account’s currency:

image

If AutoScale is on, it will show you the difference between your current instance count and the maximum instance count – and how much you are saving by using it.

If AutoScale is off, we will show you how much we predict you could save if you were to turn on AutoScale.  Put another way - we are updating your bill to include suggestions on how you can pay us less in the future (please don’t tell my boss about this… <g>)

Virtual Machines: Support for Configuring Load Balancer Probes

Every Virtual Machine, Cloud Service, Web Site and Mobile Service you deploy in Windows Azure comes with built-in load balancer support that you can use to both scale out your app and enable high availability.  This load balancer support is built into Windows Azure and included at no extra charge (most other cloud providers make you pay extra for it).

Today’s update of Windows Azure includes some nice new features that make it even easier to configure and manage load balancing support for Virtual Machines – and includes support for customizing the network probe logic that our load balancers use to determine whether your Virtual Machines are healthy and should be kept in the load balancer rotation.

Understanding Load Balancer Probes

Load-balancing network traffic across multiple Virtual Machine instances is important, both to enable scale-out of your traffic across multiple VMs, as well as to enable high availability of your app’s front-end or back-end virtual machines (as discussed in the SQL Server AlwaysOn section earlier). A network probe is how the Windows Azure load balancer detects failure of one or more of your virtual machine instances – whether due to software or hardware failure.  If the network probe detects an issue with a specific virtual machine instance, it will automatically fail traffic over to your healthy virtual machine instances, preventing customers from thinking your application is down.

The default configuration for a network probe from the Windows Azure load balancer simply uses TCP on the same port your application is load-balancing.  As shown in the below example, each Virtual Machine in a load-balanced set is receiving TCP traffic on port 80 from the public internet (likely a website or web service). With a simple TCP probe, the load-balancer sends an ongoing message, every 15 seconds by default, on that same port to each Virtual Machine, checking for health. Because the Virtual Machine is running a website, if the Virtual Machine and web service are healthy, it will automatically reply back to the TCP probe with a simple ACK to the load balancer. While this ACK continues, the load-balancer will continue to send traffic, knowing the website is responsive.

In any situation where the website is unhealthy, the load balancer will not receive a response from the website.  When this happens the load balancer will stop sending traffic to the virtual machine that is having problems, and instead direct traffic to the other two instances, as shown for Virtual Machine 2 below. This simple high availability option will work without having to write any special code inside the VM to respond to the network probes and can protect you from failure due to the application, the virtual machine, or the underlying hardware (note: if Windows Azure detects a hardware failure we’ll automatically migrate your Virtual Machine instance to a new server).

clip_image001[4]

Windows Azure allows you to configure both the time interval for sending each network probe (15 seconds is the default) and the number of probe attempts that must fail before the load balancer takes the instance offline (the default is 2). Thus, with the defaults, after 30 seconds of receiving no response from a web service, the load balancer will consider it unresponsive and stop sending traffic to it until a healthy response is received later (15 seconds per probe * 2 probes).

You can also now configure custom HTTP probes – which is a more advanced option. With HTTP probes, you can configure the load balancer’s network probe request to be sent to a separate network port from the one you are load-balancing (and this port does not have to be open to the Internet – the recommendation is for it to be a private port that only the load balancer can access). This requires your service or application to listen on this separate port and respond to the probe request based upon the health of the application. With HTTP probes, the load balancer will continue to send traffic to your Virtual Machine as long as it receives an HTTP 200 OK response from the network probe request. Similar to the above TCP intervals, with the defaults, when a Virtual Machine does not respond with an HTTP 200 OK after 30 seconds (2 x 15 second probes), the load balancer will automatically take the machine out of traffic rotation until hearing a 200 OK back on a subsequent probe. This advanced option does require writing code to listen and respond on a separate port, but gives you a lot more control over traffic being delivered to your service:

clip_image001[6]
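To give a rough idea of what listening on a separate probe port can look like in practice, below is a minimal self-hosted probe responder. The port (8080), URL prefix and health check are assumptions for this sketch, not a prescribed implementation:

using System;
using System.Net;
using System.Text;

class ProbeResponder
{
    static void Main()
    {
        // Listen on a private port that only the load balancer probes.
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/probe/");
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();
            bool healthy = CheckApplicationHealth();
            // 200 OK keeps the VM in rotation; anything else takes it out.
            context.Response.StatusCode = healthy ? 200 : 503;
            byte[] body = Encoding.UTF8.GetBytes(healthy ? "OK" : "Unhealthy");
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
        }
    }

    static bool CheckApplicationHealth()
    {
        // Placeholder: check a database connection, disk space, a downstream API, etc.
        return true;
    }
}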

Configuring Load Balancer Probe Settings

Before today’s release, configuring custom network probe settings required you to use PowerShell, our cross-platform CLI tools, or code written against our REST Management API.  With today’s Windows Azure release we’ve added support to configure these settings using the Windows Azure Management Portal as well.

You can configure load-balanced sets for new or existing endpoints on your virtual machines.  You can do this by adding or editing an endpoint on a Virtual Machine.  To do this with an existing Virtual Machine, select the VM within the portal and navigate to the Endpoints tab within it.  Then add or edit the endpoint you want to open to external callers:

image

The Edit Endpoint dialog allows you to view or change a port that is open to the Internet (and existed before today’s release): 

image

Selecting the “Create Load-Balanced Set” or “Reconfigure the Load-Balanced Set” checkbox within the dialog above will now allow you to proceed to another page within the wizard that surfaces the load balanced set and network probe properties:

image

Using the screen above you can now change the network probe settings to be either TCP or HTTP based, configure which internal port you wish to probe on (if you want your network probe to be private and different than the port you use to serve public traffic), configure the probe interval (default is every 15 seconds), as well as configure the number of times the network probe is allowed to fail before the machine is automatically removed from network rotation (default is 2 failures).

Identifying Network Probe Problems

In addition to allowing you to create/edit the network probe settings, today’s Windows Azure Management Portal release also surfaces cases where network probes are misconfigured or having problems.  For example, if during the Virtual Machine Preview you created a VM and configured a load-balanced set prior to probes being a required configuration item, we will show an error icon under the load-balanced set name column to indicate that the load-balanced set is missing its probe configuration:

image

Operation Logs and Alerts Now in “Management Services” section of Portal

Previously, the “Alerts” and “Operation Logs” tabs were under the “Settings” extension in the Windows Azure Management Portal.  With today’s update, we are moving this cross-cutting management and monitoring functionality to a new extension in the Windows Azure Portal named “Management Services”. The goal is to increase discoverability of common management services and to provide better categorization of functionality that cuts across all Windows Azure services. We will continue to enrich and add to this cross-cutting functionality in Windows Azure over the next few releases.

Note that this change will not affect existing alert rules that were previously configured; only the location where they show up in the portal is different.

image

Additions to Operation Logs

Prior to today, you could find operation history for Cloud Services and Storage operations. With this release, we are adding operation history data for the following additional areas:

  • Disk operations – add and delete Virtual Machine Disks
  • Autoscale – autoscale settings changes and autoscale actions
  • Alerts
  • SQL Backup configuration changes

We’ll add to this list in later updates this year to include all other services/operations as well.

Summary

Today’s release includes a bunch of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.


Michael Washam (@mwashamtx) described Creating Highly Available Workloads with Windows Azure with the Windows Azure Management Portal on 8/13/2013:

imageIn a recent update to the Windows Azure Management Portal, the Windows Azure team added the capability to create and manage endpoint probes. While this functionality has always been available, it was previously restricted to the Windows Azure PowerShell cmdlets. Having this ability in the management portal is a huge improvement for those seeking every tool at their disposal for improved availability.

In this post I’m going to show how you can take advantage of this new functionality in combination with availability sets to create a true highly available workload – in this case it will be a highly available web farm.

In the Windows Azure Management Portal create a virtual machine, select the Windows Server 2012 image and on the page asking for Availability Set select Create an Availability Set.

create-av-set

Create Availability Set

Once selected, type in a name for your availability set. The name I’ve chosen is WEBAVSET.

Specifying an availability set ensures that Windows Azure will put members of the same availability set on different physical racks in the data center. This gives you redundant power and networking. It also tells Windows Azure not to take down all nodes in the set at the same time while performing host updates, so some of the nodes for your application will always be up and running.

Note: The only way to achieve 99.95% SLA with Windows Azure is by grouping multiple VMs performing the same workload into an availability set.

create-av-set2

On the last screen of the portal you can skip creating the HTTP Endpoint here. You will configure the load balancer in a later step.

create-web-3.

Once the first virtual machine has completed provisioning, create another virtual machine using the same base image.

On the screen where you are asked about cloud services use the drop down list to select the previously created cloud service.
Note: Selecting an existing cloud service is another huge improvement in the portal UI – long requested!

Select the availability set drop down and you should see the AV Set name you created with the first VM. Select this AV set before proceeding.

create-web2

Once both virtual machines are provisioned they should both be in the same cloud service and availability set.

Same Cloud Service (same host name)
same-cs

Same Availability Set
same-av-set

Next configure each virtual machine for a workload to load balance. To keep it simple, RDP into each VM, launch Server Manager -> Manage -> Add Roles and Features, and add the Web Server (IIS) role.

Once the Web Server role is installed on each server, open notepad to edit the default page (c:\Inetpub\wwwroot\IISStart.htm) to show which server is serving up traffic to verify load balancing is working.

Add the following HTML code to the page replacing VMNAME with the name of the VM you are on:

   <h1>VMNAME </h1>

Edited Page for webvm1
edit-page

Now to add the load balanced endpoints. Go back to the Windows Azure Management Portal and under the first VM (webvm1 in my case) select Endpoints at the top.
Then click Add towards the bottom of the screen.

New Endpoint Wizard
create-ep

Select HTTP from the drop-down, check Create a Load-Balanced Set, and then click the next arrow.

Creating a Load Balanced HTTP Endpoint
create-ep2

Default TCP Probe Settings

The default load balancer probe settings are set to TCP. What this screen means is that every 15 seconds the load balancer probe will attempt a TCP connect on the specified probe port. If it does not receive a TCP ACK twice in a row (the Number of Probes setting), it will consider the node offline and will stop directing traffic to it. This alone is a huge benefit Windows Azure gives you for free that was previously only available via the PowerShell cmdlets.

create-ep3

Configuring HTTP Probe Settings

Changing the probe type to HTTP gives you a bit more flexibility and power over what actions you can take. You can now specify a ProbePath property on the endpoint. The ProbePath is essentially a relative HTTP URL on your web servers that should respond with an HTTP 200 if the server is fine; ANY other response causes the node to be taken out of rotation. This allows you to essentially write your own page that checks the state of the VM. Whether that means verifying disk space, database connectivity or Internet connectivity, the choice is yours.

For my simplistic example I’m just going to point this to the root of the site / so as long as IIS returns an HTTP 200 (OK) the server should be in the load balanced rotation.
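If you want a smarter check than the site root, a simple ASP.NET handler along the following lines could serve as the ProbePath target. The file name (/healthcheck.ashx) and the checks themselves are illustrative assumptions:

// healthcheck.ashx.cs - point the endpoint's ProbePath at /healthcheck.ashx.
using System.Web;

public class HealthCheckHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Replace with real checks: database reachable, disk space, etc.
        bool healthy = true;

        // HTTP 200 keeps the node in rotation; any other response pulls it out.
        context.Response.StatusCode = healthy ? 200 : 503;
        context.Response.Write(healthy ? "OK" : "Unhealthy");
    }

    public bool IsReusable { get { return true; } }
}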

create-ep4

Adding an endpoint to an existing load balanced set

Next add the load balanced endpoint on the second virtual machine by opening the VM in the Windows Azure Management Portal, click Endpoints at the top of the screen and Add at the bottom.
This time instead of creating a new endpoint select “Add Endpoint to an existing Load-Balanced Set”.

add-ep5

On the next screen select HTTP for the name and click the checkmark to add the endpoint.

add-ep6

A few additional details to be aware of with endpoint probes:

  • If a node is taken out of rotation, once it becomes responsive again the load balancer will automatically add it back to rotation.
  • The load balancer accesses the probe port using the internal IP address of the VM and not the public IP of the cloud service. This means the probe port does not have to be the same port as defined in the endpoint.
  • There are no credentials to be passed along with the load balancer requests. The ProbePath property for HTTP probes should always point to a url that can respond successfully with no authentication (for those of you deploying SharePoint you already know this).
  • To troubleshoot a load balanced endpoint with HTTP probes always check the web logs of the servers. The load balancer requests are obvious, and if you see response codes other than 200 you know why the node is out of rotation.

Verification

Browse to the cloud service to ensure the load balancer is behaving as expected. If working (and you followed my post) you should see both VMs come through if you hit F5 a few times.

Response from VM1
verification1

Response from VM2
verification2

Now the important part: Ensure that the load balancer does not direct traffic to a node that is down.

Pick a VM and click the Shutdown button in the Windows Azure Management Portal (at the bottom of the page for the VM).

Stopped VM
stopped-vm

Once the virtual machine is stopped refresh the site multiple times. You should not see a timed out response or a response from the VM you shut down.

How to do the same thing in PowerShell

For those of you that may have the need to spin up workloads like this often it is probably worth investing the time in learning PowerShell.
Here is a quick example of how the above process can be packaged up in a small script to automate all of the above steps.
Note: if you are new to WA PowerShell take a look at the Windows Azure PowerShell reference guide (very incomplete but good starting point).

# Retrieved with Get-AzureVMImage | select ImageName
$img = "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201306.01-en.us-127GB.vhd"
$user = "mwasham"
$pwd = "some@pass123a1"
$vmname = "webvm"
$csname = "lbcloudsvc123a"
$av = "WEBAVSET"
$instances = 2
$vms = @()

# Loop to create N number of VMs
for($i=0; $i -lt $instances; $i++){
    $vmInstanceName = "$vmname-$i"

    # Compose a VM config with the load balanced endpoint and the correct av set
    $vms += New-AzureVMConfig -Name $vmInstanceName -InstanceSize Small -ImageName $img `
                -AvailabilitySetName $av |
            Add-AzureProvisioningConfig -Windows -AdminUsername $user -Password $pwd |
            Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 `
                -LBSetName "LBHTTP" -ProbePort 80 -ProbeProtocol http -ProbePath "/"
}

New-AzureVM -ServiceName $csname -Location "West US" -VMs $vms

Summary

In this post I showed how to take advantage of the new functionality in the Windows Azure Management Portal to add additional high availability to your web farm (or other multi-vm workload).


• David Linthicum (@DavidLinthicum) asserted “Some enterprises make all the right moves with cloud computing, but most are bound to stub their toes” in a deck for his 3 common mistakes in cloud migrations -- and how to avoid them article of 8/13/2013 for InfoWorld’s Cloud Computing blog:

imageThere are companies that succeed with cloud computing strategies and first-generation implementations. They typically have a few core characteristics, including the willingness to spend the necessary time for planning, the use of their best and brightest, and being unafraid to make mistakes.

The proper use of cloud computing technology is not something you can find in a book. There is a bit of trial and error to the process, and you have to be willing to build this experimentation into your cloud implementation and migration processes.

imageHowever, many enterprises are not doing it that way. Instead, they are driven by forces that could lead them to failure. Here are the top three problems I see in companies' current cloud adoption efforts:

1. They jump to the technology too fast
The most common mistake is to drive right to the "Puppet or Chef?" or "Amazon or Rackspace?" discussion before much is known about the core business requirements.

We love technology, and it's much more fun to talk about it than to talk about the business drivers and architectural planning. However, most companies leaping into the technology too quickly are likely to use cloud computing ineffectively.

2. They get involved in the cloud provider drama
We all know that the OpenStack camp doesn't like the Amazon crew, who doesn't like the CloudStack gang. What does this mean to you? Not much. As the blog wars continue, you should focus on the technology in terms of fit, function, and value -- not the hyped industry drama that continues to be a characteristic of the cloud computing market.

3. They focus to the wrong degree on security
Security seems to have two extremes in the world of cloud computing. Some businesses focus too much on cloud security, to the point of being paranoid. Thus, they spend more money than necessary, reducing the value of moving to the cloud and perhaps eliminating that value altogether.

On the other side of the spectrum, some companies spend too little time dealing with cloud computing security. They end up exposed, and their cost of risk rises significantly. You have to start from your requirements to assess the right level of security.

The theme is simple: You should pay attention to your business requirements and use that to drive your technology, security, and other decisions. But the reality is often that business requirements remain disconnected from the "sexier" issues that get all the attention.


Nathan Totten (@ntotten) and Nick Harris (@cloudnick) released Cloud Cover Episode 111: New Windows Azure Diagnostics enhancements in SDK 2.0 and above to Channel9 on 8/12/2013:

imageIn this episode Nick Harris and Nathan Totten are joined by Boris Scholl, Senior Program Manager on the Visual Studio Azure Tools team. During this episode Boris demonstrates the Windows Azure Diagnostics tooling enhancements introduced in the Windows Azure SDK 2.0 and above for Cloud Services and Windows Azure Web Sites.

Like Cloud Cover on Facebook!

Follow @CloudCoverShow
Follow @cloudnick
Follow @ntotten


Neil MacKenzie (@mknz) described the Semantic Logging Application Block (SLAB) in an 8/12/2013 post:

imageThe use of elastically-scalable cloud services makes it more important than ever that service be monitored appropriately so that diagnostic information is available when needed. This can be seen in the way that Windows Azure Diagnostics (WAD) is a core feature of Windows Azure, since WAD provides a unified way to capture and persist various types of diagnostic information.

imageFor many years, Windows has supported the tracing of applications through the Trace class in the System.Diagnostics namespace. This class exposes methods such as Write(), TraceInformation() and TraceError() that can be used to write events, which can then be captured and persisted by a listener. WAD, for example, provides a listener that persists these events to Windows Azure Tables.

The Trace class has a number of failings, including the absence of any structure to the captured information and the need to decide at the point of use information such as the severity (Warning, Error, etc.) of the event. This means that the same message could be used in different places with different severity levels, increasing the complexity of basing decisions on a trace event.

The EventSource class, in the System.Diagnostics.Tracing namespace, was introduced in .NET 4.5 to support structured logging. Rather than use methods like TraceInformation() to trace events, a set of strongly-typed methods is used to trace events. These methods reside in an application-specific class derived from EventSource. The resulting events are processed using the Event Tracing for Windows (ETW) mechanism, a high-performance trace-logging system widely used by the Windows infrastructure. ETW maintains the strong-typing of events, making the event information much more useful to consumers. The PerfView utility can be used to view any ETW event generated on a system.

The Microsoft Patterns and Practices Enterprise Library v6 introduced the Semantic Logging Application Block (SLAB), which provides a set of EventSource consumers (listeners) that can be used to persist EventSource events to various sinks including flat file, SQL Server and Windows Azure Tables. The latter could be used, for example, with an appropriately instrumented application deployed to Windows Azure Virtual Machines to persist trace information to a Table where it could then be processed without the need to access the compute instance.

Dominic Betts has an excellent Channel 9 video describing SLAB. Grigori Melnik has a post on Embracing Semantic Logging. Vance Morrison, who seems to have the go-to blog for all things ETW, has a short introductory post on SLAB. Julian Dominguez did a presentation at Build 2013 on Creating Structured and Meaningful Logs with Semantic Logging. Mark Simms has a video on Design For Manageability, in the awesome Failsafe: Building Scalable, Resilient Cloud Services series, in which he provides motivation for instrumenting applications with more telemetry logging than you had likely dreamed of. Kathleen Dollard has a PluralSight course covering new features of .NET 4.5, including EventSource, which she also covers in this post.

The CTP version of the SLAB documentation can be found on CodePlex. The documentation downloads include a very readable Developers Guide (pdf) and an extensive Reference Guide (pdf). The preview version of the complete Developers Guide to Microsoft Enterprise Library documentation is downloadable from here.

The Enterprise Library v6 home page on CodePlex contains links to downloads and documentation. In v6, the various blocks have been partitioned so they can be installed independently of each other from NuGet. These include individual blocks such as the Transient Fault Handling Block and the Semantic Logging Application Block. There are three SLAB downloads on NuGet (search for “semantic logging application block”): the core SLAB functionality; a Windows Azure Sink; and a SQL Server Sink. The two sinks allow EventSource events to be persisted to Windows Azure Tables and SQL Server respectively.

EventSource

Event Tracing for Windows (ETW) is a kernel-level logging engine for Windows that has been around for several years seemingly without garnering much attention outside Microsoft. Many Windows Services use ETW for tracing. Indeed, Mike Kelly describes in an MSDN article how ETW is used by Windows Azure Diagnostics.

One reason for the obscurity of ETW was the difficulty of creating event sources. The .NET 4.5 EventSource class makes it really easy: an event source can be created merely by deriving a new class from EventSource, creating the trace-writing methods, and exposing them through a singleton EventSource.

A single EventSource may have many trace-writing methods. These are defined on a separation of concerns basis. The parameter list, provided to the EventSource user, captures only the core information for the event. The filtering properties – level, keywords, etc. – for the event are specified at design time and are not specified by the EventSource user.

The following gist contains a simple example of an EventSource class:

using System;
using System.Diagnostics.Tracing;

[EventSource(Name = "MyDomain-MyEventSource")]
public class MyEventSource : EventSource
{
    public class Keywords
    {
        public const EventKeywords Database = (EventKeywords)1;
        public const EventKeywords ExternalApi = (EventKeywords)2;
    }

    public class Tasks
    {
        public const EventTask Timing = (EventTask)1;
    }

    [Event(1,
        Message = "Method entry: {0}",
        Level = EventLevel.Verbose)]
    internal void MethodEntry(String message)
    {
        if (IsEnabled()) WriteEvent(1, message);
    }

    [Event(2,
        Message = "External call to {0}/{1} - TimeSpan: {2}",
        Level = EventLevel.Informational,
        Keywords = Keywords.ExternalApi,
        Task = Tasks.Timing)]
    internal void ApiTiming(String apiName, String apiOperation, Int64 elapsedTimeMilliSeconds)
    {
        if (IsEnabled()) WriteEvent(2, apiName, apiOperation, elapsedTimeMilliSeconds);
    }

    [Event(3,
        Message = "Invalid configuration entry: {0}",
        Level = EventLevel.Warning)]
    internal void MissingConfigurationEntry(String entryName)
    {
        if (IsEnabled()) WriteEvent(3, entryName);
    }

    public static readonly MyEventSource Log = new MyEventSource();
}

The EventSource attribute is used to provide an alternative name for the EventSource, otherwise the class name is used. This name needs to be unique across all systems so should include a corporate namespace, e.g., MyDomain-MyEventSource.

The Keywords and Tasks classes provide enumerations used initially to decorate event definitions and then to provide a structured filtering capability for ETW consumers. This allows the various events to be filtered by different listeners, e.g., a listener could listen only to Timing events for ExternalApi calls. EventKeywords and EventTask are enumerations, the values of which can be overridden. Another class, OpCodes, can also be used – along with an accompanying EventOpCode enumeration. The various enumerations are bitwise so that multiple Keywords, for example, can be aggregated for a single event. Note that, if used, the class names must be Keywords, Tasks and OpCodes.

Each event is decorated with an Event attribute, which provides schema information for the event. An event is identified by the EventId, a unique (1-based) sequential integer. The Message property specifies a string used to format the message text for the event. The Level property specifies the level (Informational, Warning, Error, ..) for the event. The Keywords, Task and Opcode specify values from the classes defined earlier.

The core of an event definition is the method used to record it. The method parameter list captures all the information for the event. It invokes a heavily overloaded WriteEvent() method to write the event. The first parameter to the WriteEvent() method is an integer which must be the same as the EventId, otherwise the event will silently not be written. The IsEnabled() method on the EventSource class can be used to avoid writing the event when the EventSource is disabled.

The EventSource class is completed by the creation of a singleton static instance used to expose the event-writing methods. This can be used to write an event as follows:

MyEventSource.Log.ApiTiming(
   "Twitter",
   "GET statuses/home_timeline",
   stopwatch.ElapsedMilliseconds);
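For context (the original gist doesn't show this part), the stopwatch above would typically be a System.Diagnostics.Stopwatch wrapped around the external call:

var stopwatch = Stopwatch.StartNew();   // System.Diagnostics
// ... make the external API call here ...
stopwatch.Stop();

MyEventSource.Log.ApiTiming(
   "Twitter",
   "GET statuses/home_timeline",
   stopwatch.ElapsedMilliseconds);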

This writes the event to the ETW log from where it can be retrieved by a listener. The event is written in structured form allowing it to be subsequently recreated in a type-consistent manner, which simplifies the subsequent analysis of the event. The event can contain a human-readable message as well as the actual data used to compose the message.

The PerfView utility can be used to view the events created by the EventSource. When initiating the data collection using PerfView, the name of the EventSource can be provided with a * prefix (as an Additional Provider in the Advanced Options) – i.e., *MyDomain-MyEventSource. PerfView is a powerful ETW utility that comes with extensive documentation showing how to use it for analyzing ETW event logs. These contain a wealth of information about a running system since many Windows subsystems contain ETW instrumentation.

This section is standard ETW that makes no use whatsoever of SLAB.

Semantic Logging Application Block (SLAB)

SLAB builds on standard ETW by providing in-process and out-of-process listeners that persist ETW EventSource events to one or more of:

  • Console
  • Flat file
  • Rolling file
  • SQL Server
  • Generic database 
  • Windows Azure Tables

As of this writing, SLAB does not support persisting events to the Windows Event Log. Note that the use of SQL Server/Generic database or Windows Azure Tables requires the appropriate supplementary NuGet SLAB download.

SLAB provides source code which can be used as a base for developing additional listeners. It also exposes extension points supporting the customization of message formatting.

In-Process

The use of an in-process listener requires the creation and enablement of an EventListener. For example,

EventListener consoleEventListener = ConsoleLog.CreateListener();

consoleEventListener.EnableEvents(
   MyEventSource.Log,
   EventLevel.LogAlways,
   MyEventSource.Keywords.ExternalApi);

In this example, MyEventSource is the class name for the event source and the listener is filtered to only persist events defined with the ExternalApi keyword. When no longer needed, the EventListener can be disabled and disposed as follows:

consoleEventListener.DisableEvents(MyEventSource.Log);
consoleEventListener.Dispose();

And that is all that is required to host an in-process SLAB event listener. The various listeners provided by SLAB are all used as above with the proviso that some configuration is required, such as the filename for a file listener or the connection string for a Windows Azure storage account.
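For instance, hosting the Windows Azure Table listener in-process might look like the following sketch; WindowsAzureTableLog comes from the Windows Azure Sink NuGet download, and the instance name and storage emulator connection string are assumptions:

// Persist MyEventSource events to a Windows Azure Table (storage emulator here).
EventListener azureTableEventListener = WindowsAzureTableLog.CreateListener(
    "myInstanceName",                 // instance name recorded with each event
    "UseDevelopmentStorage=true");    // storage account connection string

azureTableEventListener.EnableEvents(MyEventSource.Log, EventLevel.LogAlways);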

Out-of-Process

SLAB provides a separate download – SemanticLogging-svc.exe – that can be run either as a console application or a Windows Service. The various SLAB listeners can be configured, in SemanticLogging-svc.xml, to persist the EventSource events to one of the supported sinks – the same list as for in-process. The advantage of doing this out-of-process is that the monitored application has no SLAB dependency – since EventSource is a pure ETW feature – and does not suffer any performance degradation from the in-process use of SLAB.

The out-of-process configuration requires the specification of the EventSource and one or more listeners to it. The following example shows the SemanticLogging-svc.xml configuration of a Windows Azure Sink that persists all events, of Verbose or higher level from an event source named MyEventSource, to the local storage emulator:

<eventSource name="MyEventSource">
   <eventListeners>
      <eventListener name="azureTable" level="Verbose" />
   </eventListeners>
</eventSource>

<eventListeners>
   <azureTableEventListener name="azureTable"
      instanceName="myInstanceName"
      connectionString="UseDevelopmentStorage=true"/>
</eventListeners>

Windows Azure Table Listener

The Windows Azure Table Listener persists events to a Windows Azure Table. By default, the table is named SLABLogsTable and the data is persisted every 10 seconds. The listener stores the following properties:

  • PartitionKey
  • RowKey
  • Timestamp
  • EventId
  • EventDate
  • Keywords
  • EventSourceGuid
  • EventSourceName
  • InstanceName
  • Level
  • Message
  • Opcode
  • Task

The listener takes advantage of the schema-less nature of Windows Azure Tables to also store the actual parameters provided when the event was created. The PartitionKey is generated as a per-minute bucket using DateTime.Ticks. The RowKey comprises the instance name, the Ticks count for the event, and an appended salt to guarantee uniqueness. By default, the PartitionKey and RowKey tweak the Ticks count so that the data is stored in reverse chronological order.
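As a rough illustration of that key scheme – this is the idea, not SLAB's exact code – the keys might be derived along these lines:

// Illustrative only: per-minute bucket, inverted so newer rows sort first.
DateTime eventTime = DateTime.UtcNow;
long minuteBucket = eventTime.Ticks / TimeSpan.TicksPerMinute;
string partitionKey = (long.MaxValue - minuteBucket).ToString("D19");

// RowKey: instance name, inverted ticks, plus a salt for uniqueness.
string rowKey = string.Format("myInstanceName_{0:D19}_{1}",
    long.MaxValue - eventTime.Ticks,
    Guid.NewGuid().ToString("N").Substring(0, 8));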

Summary

ETW provides a powerful way to trace applications in a structured way. The EventSource class, new in .NET 4.5, makes it really easy to use ETW. The Semantic Logging Application Block provides a number of listeners that can be used to consume EventSource events and persist them to various locations, including Windows Azure Tables.


<Return to section navigation list>

Windows Azure Pack, Hosting, Hyper-V and Private/Hybrid Clouds

• Kristian Nese (@KristianNese) described Getting started with Gallery Items in Windows Azure Pack (WAP) in an 8/14/2013 post:

imageI’ve been diving into Windows Azure Pack lately, to explore some of the cloud characteristics this solution will bring to your organization together with System Center 2012 R2 (SCVMM, Orchestrator and SPF).

Recently, Microsoft announced some cool stuff on their codeplex (community) site.

During TechEd, you may have seen the presentation by Eric Winner and Marc Umeno on the subject, where they talked about gallery items in Windows Azure Pack.

What are gallery items?

Gallery items in Windows Azure Pack are a set of predefined services that you can offer to your tenants.

Interestingly, the design is related to service templates in VMM, but currently they are very different.

A service template in VMM can be authored with both PowerShell and the console, and is still the most flexible and powerful solution. However, service templates aren’t exposed to the tenant API/portal in Windows Azure Pack.

Hence, we get gallery items.

The story has been clear by now. This R2 release is the result of huge investments in Windows Azure, and Microsoft is building first for the cloud (Azure) and then for the bits you can purchase and run on your own. Gallery items are basically “service templates” built to deliver a service, like a web server, an application server or any other server role/application.

If you have little or none experience with Windows Azure, please continue to read where I will try to explain in a bit more detail.

Windows Azure, when first released, was all about Platform as a Service. This service model (referring to the definition of cloud computing) basically provides developers with a scalable framework, where they can write their code and upload the code and packages to Azure, where Microsoft’s high-tech datacenters execute the application in an architecture where everything is loosely coupled.

Personally, this is perhaps the most interesting service model as it ‘forces’ you to modernize your applications to fit into this model. If you were looking for a place to run highly scalable internet applications, Windows Azure was a very good option back in 2008-2010.

We now have some new services in Windows Azure, and we can leverage the more traditional Infrastructure as a Service service model. This gives us virtual compute, virtual networks, virtual storage and virtual machines that we can manage as if they were running on-premises.

As Infrastructure as a Service was introduced back in 2011, we saw some changes to the Platform as a Service model, or to be more precise, we got something called ‘Cloud Services’.

Cloud Services was either a worker role, web role or a virtual machine role.

Together with traditional virtual machines, we now had options when creating applications and services for the public cloud.

Back to Windows Azure Pack.

In Windows Azure Pack, we can create traditional virtual machines (infrastructure as a service) together with virtual networks. All of this are running on Windows Server 2012 R2 (Hyper-V) and System Center 2012 R2 (SCVMM, Orchestrator with SPF).

New in this release, is support for both Service Bus and Virtual Machine Roles. Both of these are related to platform as a service, and we are now focusing on Virtual Machine Roles.

The gallery items are the building blocks for your virtual machine roles.

Let’s explore this and see how we can get things running in our cloud (either private cloud or service provider cloud).

Download gallery items from Codeplex

A few sample gallery items are now available in the Web Platform Installer now:

  1. Install WebPI: http://www.microsoft.com/web/downloads/platform.aspx
  2. Click the "options" link at the bottom of the WebPI UI.
  3. In the custom feed field, enter the following URL: http://www.microsoft.com/web/webpi/partners/servicemodels.xml.
  4. Click "Add Feed" and dismiss the dialog.

Please note that only the three Windows Server 2012 * resources are related to gallery items. Both the Service Template Example Kit and the SharePoint 2013 Service Template are only suited for service templates in SCVMM.

Once downloaded, we can navigate to the folder we placed it into and see the items. Included with every resource, we have a readme file.

Note: there are some important steps missing in the readme file to get this working, so pay attention to the instructions later when importing and customizing the resources in the SCVMM library.

How to import and use Windows Server 2012 R2 Web Server Gallery Resource

In order to publish the gallery resources as a gallery item, you must:

  1. Import the resource extension package into System Center Virtual Machine Manager.
  2. Ensure the virtual hard disks in SCVMM are properly prepared and have all the necessary properties set.
  3. Import the resource definition package as a gallery item.
  4. Make the gallery item public.
  5. Add the gallery item to a plan.

1) Import the resource extension package into System Center Virtual Machine Manager

Using PowerShell, you must import the resource extension package into the Virtual Machine Manager library.

Sample Windows PowerShell:

$libsharepath = <you must set the library share path from your environment>

Example: $libsharepath = "\\vmmserver\library\"

$resextpkg = $Env:SystemDrive + "\GalleryResources\WS2012WebServer-VMRole-Pkg\WS2012WebServer.resextpkg"

Import-CloudResourceExtension -ResourceExtensionPath $resextpkg -SharePath $libsharepath -AllowUnencryptedTransfer

The import can only be done using PowerShell.

To verify the import, run the Get-CloudResourceExtension PowerShell cmdlet and locate the newly imported extension.

      Get-CloudResourceExtension

2) Prepare the virtual hard disk

Since you have landed on this blog, I already assume you are familiar with sysprep and how to take action on this, either manually or by using SCVMM.

You must provide a virtual hard disk from which the virtual machine role will be created. If you already have a vhdx file in your library, go ahead and use this.

Note: to actually get this working, you must have two disks in your library: one disk containing the operating system, and one disk for the data partition. You only have to prepare the operating system disk in this guide. The disk for the data partition will be explained in a bit.

Since the resource extension will only work with Windows Server 2012/R2, use one of the following operating system values on your Windows Server 2012/R2 hard disk:

  • 64-bit edition of Windows Server 2012 Datacenter
  • 64-bit edition of Windows Server 2012 Standard
  • 64-bit edition of Windows Server 2012 Essentials
  • Windows Server 2012 R2 Datacenter Preview
  • Windows Server 2012 R2 Standard Preview
  • Windows Server 2012 R2 Essentials Preview

Sample PowerShell

$myVHD = <you must set to the virtual hard disk in your environment>

Example: $myVHD = Get-SCVirtualHardDisk -ID "your virtual hard disk ID"

$WS2012R2Datacenter = Get-SCOperatingSystem | where { $_.Name -eq "Windows Server 2012 R2 Datacenter Preview" }

Set-SCVirtualHardDisk -VirtualHardDisk $myVHD -OperatingSystem $WS2012R2Datacenter

The Operating System value can be set using PowerShell or the Virtual Machine Manager administrator console.

3) Familyname and Release

These properties must be set in order for the Windows Azure Pack portal to display the virtual hard disk as an available disk for this gallery resource. The Familyname and Release properties are shown in the portal drop-down list, so set them to values that will make sense to your user.

Familyname property values should indicate the contents of the virtual hard disk, including the Windows Server release and edition.  For this gallery resource, you should consider the following Familyname values.

  • Windows Server 2012 Datacenter
  • Windows Server 2012 Standard
  • Windows Server 2012 Essentials
  • Windows Server 2012 R2 Datacenter Preview
  • Windows Server 2012 R2 Standard Preview
  • Windows Server 2012 R2 Essentials Preview

Release property values must conform to the Windows Azure versioning scheme of n.n.n.n:

  • 1.0.0.0
  • 1.0.0.1

etc

Sample PowerShell

$myVHD = <you must get the virtual hard disk in your environment>

Set-SCVirtualHardDisk -VirtualHardDisk $myVHD -FamilyName "Windows Server 2012 R2 Datacenter Preview" -Release "1.0.0.0"

Familyname and Release values can be set using PowerShell or the Virtual Machine Manager administrator console.

Note: repeat these steps on your virtual hard disk for the data partition. The important thing to note here is that you must not define any operating system on this disk, as the portal will then consider it to contain the operating system and hence not list it in the data partition field.

4) Tags

The Windows Server 2012 gallery resource depends on a virtual hard disk with the following tags

  • WindowsServer2012
  • .NET3.5

NOTE: this .NET3.5 tag indicates that you have pre-installed .NET3.5 in your sysprepped VHD.

Sample PowerShell

$myVHD = <you must set to the virtual hard disk in your environment>

$tags = $myVHD.Tag

if ( $tags -cnotcontains "WindowsServer2012" ) { $tags += @("WindowsServer2012") }

if ( $tags -cnotcontains ".NET3.5" ) { $tags += @(".NET3.5") }

Set-SCVirtualHardDisk -VirtualHardDisk $myVHD -Tag $tags

The tag property can only be set using PowerShell.

5) Windows Azure Pack Service Administrator Portal

Once the resource extension and virtual hard disk are all correctly set in SCVMM, you can import the resource definition package using the Service Administrator Portal in the Windows Azure Pack.

  1. Open the Service Admin Portal.
  2. Navigate to the VM Clouds workspace.
  3. Click the Gallery tab.
  4. Click Import.

Select and import the WebServer(IIS).resdefpkg file in the unzipped location.  The default unzip location is "c:\GalleryResources\WS2012WebServer-VMRole-Pkg\".

Note that the gallery item now is listed on the Gallery tab.

Now that the packages for the Virtual Machine Role have been installed, you can publish the gallery item to make it available to tenants.

To make the Virtual Machine Role available to the tenant, you need to add it to a plan. In this procedure, you publish the Virtual Machine Role that you installed.

  1. On the Gallery tab, select the version of the gallery item that you just imported.
  2. Click the arrow next to the gallery item name.
  3. Explore the details of the gallery item.
  4. Navigate back and click Make Public.
  5. Select the Plans workspace in the Service Admin Portal.
  6. Select the plan to which you want to add this gallery item.
  7. Select the Virtual Machine Clouds service.
  8. Scroll to the Gallery section.
  9. Click Add Gallery Items.
  10. Select the gallery items that you imported, and then click Save.

Brilliant, we are almost done.

The last thing to do is to create a new tenant, or log on to this portal with an existing tenant.

The tenant must then subscribe to a plan that is offering these gallery items.

Here are some screenshots showing how to deploy a gallery item into a cloud defined in SCVMM, presented by Windows Azure Pack:

6) Deploying Virtual Machine Roles in Windows Azure Pack

In the portal, click New -> Virtual Machine Role -> From Gallery.

This will bring up the available gallery items.

In the ‘Create Virtual Machine Role from ...’ screen, please select the proper item. In my case, I have both a web server and a stand-alone Windows Server 2012 R2 resource. I will select my Web Server and proceed.

Give the virtual machine role a name (during this process, Windows Azure Pack will check with SCVMM whether the name is available or already taken).

Select the right version and the right hosting plan. If the gallery item is not available in a hosting plan, you are unable to proceed.

The next step will require some input from the tenant.

You can define the following:

Size

Choose the size of the instance. Extra small, small, Medium, Large, Extra large.

Operating system disk

The disk you prepared with PowerShell should be available here

Data disk

The other disk (containing no operating system, remember?) is listed here

IP Address allocation method

Dynamic or static is the option here

IP Address type

IPv4 or IPv6

Logical Network

The networks you have made available both in the cloud in SCVMM and in the plan are available here. I would strongly suggest leveraging network virtualization in this case, letting tenants create their own virtual networks prior to this and deploying the virtual machine role to that network.

New user name

Specify the username

New Password

Assign a password to the user

Confirm

Confirm your password

Virtual Machine Name Pattern

By default, you will see ‘Computer###’, where the hashes refer to incremental numbers.

Workgroup

Name of the workgroup this virtual machine role should be a part of

Time Zone

Choose the proper time zone for your virtual machine role

Initial Instance Count

How many virtual machines will you deploy at first? This is where you define it

Minimum Instance Count

The minimum number of instances for the virtual machine role

Maximum Instance Count

Decide how many instances this virtual machine role can scale out to.

Click next to proceed

In this screen, you can assign website name and application pool together with your preferred TCP port.

This is because we are deploying a web server virtual machine role. Once you are done, click finish to start the deployment

Note: if your cloud in SCVMM has any capability profiles associated, the deployment will fail.

You must uncheck any capability profiles since gallery items don’t have this property.

In the portal, we can now see that the virtual machine role is being provisioned.

Since I am the SCVMM admin as well, I can check in the Jobs view in the console that some cool stuff is actually taking place in my environment.

Once the deployment has succeeded, you can manage it further in the tenant portal.

This screenshot illustrates that I am able to scale my instances for this virtual machine role.

Hopefully this was useful to get you started with gallery items in Windows Azure Pack.


• Chris Avis (@chrisavis) posted VMware or Microsoft: Simplified Microsoft Hyper-V Server 2012 Host Patching = Greater Security and More Uptime on 8/14/2013:

imageMany IT Pros still don’t know that Microsoft offers a bare metal hypervisor. Microsoft Hyper-V Server 2012 installs directly on your hardware with a very minimal set of Windows Server components to optimize the virtualization environment. This Hyper-V platform eliminates many of the common Windows Server infrastructure features such as Active Directory, DNS, IIS, DHCP, and more. Below you can see a comparison between the Add Roles and Features Wizards for a Windows Server 2012 and Windows Hyper-V Server 2012.

image

Because the code doesn't even exist on the platform, there is a significantly reduced attack surface that enhances security. Combine this with built-in BitLocker support, and Microsoft Hyper-V Server 2012 is an excellent, secure solution for remote sites where there may not be the same level of physical security. VMware has no capability within the vSphere Hypervisor that can enable the encryption of either VMFS, or the VMDK files themselves. Instead, they rely on hardware-based or in-guest alternatives, which add cost, management overhead, and additional resource usage.

More importantly, there is typically very little to patch on Patch Tuesday. For instance, if there is a critical Windows DNS patch that requires a reboot, it simply does not apply to Windows Hyper-V Server. The result – a significant reduction in host downtime, which means the guest workloads don’t have to be migrated or incur any downtime while the host is rebooted. In the interest of transparency – we are not perfect. There are patches that will require a Hyper-V host to be rebooted (here is a KB article for Hyper-V 2012 specific patches). However, in the event there is a patch that requires a reboot of the host, Microsoft Hyper-V Server 2012 allows you to migrate workloads to other Hyper-V servers or to leverage a replica VM while the host is being rebooted – something the free VMware offering specifically doesn’t support. To get this from VMware you must purchase the much more expensive VMware offering. I like free!

image

But when you consider that a patch reboot is a relatively small part of what goes on in production, I feel the single most important aspect of this is reduced resource usage by the host itself. Ideally, you want any hypervisor used in production to consume as few resources at the host level as possible, leaving as much as we can for the VMs we are hosting. Microsoft Hyper-V Server accomplishes this by eliminating the code for extraneous services completely.

Microsoft Hyper-V Server 2012 doesn't compromise on any Hyper-V features either. Even though this is an absolutely free hypervisor, it fully supports the same enterprise feature set as Windows Server 2012 with the Hyper-V role enabled.

image

This contrasts with the free VMware vSphere Hypervisor offering, which cripples or omits some features: easily moving running workloads to another VMware server, high-availability features, and support for more than 32 GB of installed memory on the host (this is a hard cap, too – the VMware license key will not be accepted if the host has more than 32 GB of memory installed!).

image

Finally, we aren’t finished innovating in the bare-metal virtualization space. Windows Hyper-V Server 2012 R2 is just around the corner and it boasts new updates and features to further enable IT Administrators to optimize their virtualized environments and reduce costs.

image

If you want to take a look at some of the new features, download the Windows Hyper-V Server 2012 R2 Preview here -


The Microsoft Server and Cloud Platform Team (@MSCloud) described What’s New in 2012 R2: Hybrid Networking Innovations on 8/14/2013:

imageNetworking is at the core of everything most of us use in the office or at home.  Today billions of connections are made with smartphones, slates, notebook computers and servers.  Network diversity is a necessity to support a Cloud OS vision and all of the customer and partner requirements.

This week Microsoft VP Brad Anderson details the network innovations in "What's New in 2012 R2: Hybrid Networking". There are a number of interesting customer and partner scenarios detailed in the post, from which you'll get a sense for the different possible network topologies, security, encryption, and management of the configurations.

These capabilities broadly fall under three specific areas we’ll examine today:

  • Cloud connectivity
  • Network virtualization
  • Network infrastructure management

Windows Server 2012 R2 and System Center 2012 R2 provide a set of advanced capabilities for service providers to implement hybrid networking cost-effectively, reliably, and at scale. This includes multitenant S2S connectivity, NAT, and remote access VPN. In conjunction with the Windows Azure Pack, SCVMM, and PowerShell scripting – service providers can easily automate the on-boarding of customers, as well as set up and manage all hybrid networking functions.

And for those of you interested in downloading some of the products and trying them, here are some resources to help you:

  • Windows Server 2012 R2 Preview download
  • System Center 2012 R2 Preview download
  • SQL Server 2014 Community Technology Preview 1 (CTP1) download
  • Windows 8.1 Enterprise Preview download

As always, follow us on Twitter via @MSCloud!  And if you would like to follow Brad Anderson, do that via @InTheCloudMSFT !


The Microsoft Server and Cloud Platform Team (@MSCloud) reported Windows Server 2012 R2, System Center 2012 R2, and Windows Intune Update Available October 18th in an 8/14/2013 post:

imageToday, Microsoft is excited to announce that on October 18th, eligible customers will be able to download Windows Server 2012 R2, System Center 2012 R2, and use the latest update to Windows Intune. Also on this same day, Windows 8.1 will be available to consumers and businesses worldwide.  You can find more details on this news on Microsoft Vice President Brad Anderson's blog, "Mark Your Calendars for October 18th, the R2 Wave is Coming".

So mark the date in your calendars because there is a lot to get excited about! In the meantime, you can download the preview bits and learn more about these upcoming releases in Microsoft Vice President Brad Anderson’s special blog series on “What’s New in 2012 R2” currently underway.

Here's what you need to get started.  You can download the latest previews below:

  • Windows Server 2012 R2 Preview download
  • System Center 2012 R2 Preview download
  • SQL Server 2014 Community Technology Preview 1 (CTP1) download
  • Windows 8.1 Enterprise Preview download

And for more of the latest news on these upcoming releases, follow us on Twitter via @MSCloud, and follow Brad @InTheCloudMSFT!


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Matt Thalman described How to Get a Stack Trace in LightSwitch in an 8/16/2013 post:

imageA lot of times LightSwitch customers will run into an error in the product and post a question on the forums, but the LightSwitch development team needs more information about the error.  This blog post describes how you can help the LightSwitch development team determine the source of those kinds of errors you may encounter.  Unexpected errors (or exceptions, as we call them) within the product can manifest themselves as an error message, a crash, a lack of response (hang), or an expected behavior that doesn't occur.  By providing an exception stack trace in your forum posts, you give the development team valuable information and help us help you.

Exception Lead-In

imageBefore beginning the process of debugging, you'll want to follow the necessary steps in Visual Studio or your LightSwitch app that reproduce the exception, stopping just prior to the last step that actually causes it.  If you attach and debug before all these steps, you may encounter a bunch of exceptions that are unrelated to the issue, or the performance of the process may be degraded enough to become annoying.  So just get to the point right before the exception occurs and then follow the instructions below.

Attaching to the Process

The next step is to identify which process to debug.  This handy flowchart can help you determine which process you should be attached to in order to retrieve the exception stack trace:

image

Note that if the issue you’re investigating occurs within your running application while you are running it from within Visual Studio via F5, Visual Studio will automatically be attached to both the client and server processes.  In all other cases, you’ll need to manually attach to the process.  Of course, if the issue occurs within Visual Studio itself, you’ll need to launch another instance of Visual Studio to attach to the original instance since a given VS process cannot attach to itself.

To manually attach to the process, go to “Tools –> Attach to Process…” in Visual Studio.

image

In the dialog that opens, select the name of the process as indicated by the flowchart above.  This will only allow you to attach to the process if it is running on the local machine.  If you want to attach to a remote process (for example, attach to the IIS process on a remote machine), you’ll need to follow the instructions in this MSDN article: Setup Remote Debugging.  If the process is hosted on a machine you don’t have access to, like Azure, you won’t be able to follow the debugging instructions described in this post.  Instead, you’ll need to use diagnostic tracing as described in this post: Diagnosing Problems in a Deployed 3-Tier LightSwitch Application.

Configure for Debugging

Now that Visual Studio is attached to the process, it needs to be properly configured to break on exceptions in order to see the stack trace of the exceptions.

The first thing to do is to ensure that VS is configured to debug code that you don’t own.  Visual Studio has a feature called “Enable Just My Code” that is enabled by default which prevents you from debugging code that isn’t yours.  To turn it off, follow these steps:

  1. Go to Tools –> Options.
  2. If necessary, click the “Show all settings” check box at the bottom of the dialog if the “Debugging” node doesn’t show up in the tree.
  3. In the Options dialog, navigate to Debugging –> General in the tree.
  4. Ensure that the “Enable Just My Code” check box is not checked.

image

The next thing is to configure VS so that it will break when an exception is thrown.  To do this, open the Exceptions dialog: Debug –> Exceptions.  In the Exceptions dialog, find the category of exceptions you want to break on and check its check box in the “Thrown” column.  For server and Silverlight client debugging, you’ll want to use the Common Language Runtime Exceptions; for HTML clients, you’ll want to use the JavaScript Runtime Exceptions.

image

This will break on all exceptions of that category.  A lot of times there will be exceptions thrown that are irrelevant to the actual issue.  These are normal and are handled by the LightSwitch code.  If you know the specific exception type that you want to break on, you can configure this dialog to break only on that exception type by drilling into the Common Language Runtime Exceptions node and finding the exception or by clicking the Find button and searching for it that way.

Reproduce the Exception

You’re now ready to do the final step in Visual Studio or your LightSwitch app that actually causes the exception to occur.  Once you do that, you’ll see an exception message appear in Visual Studio that looks like this:

image

Or like this if you’re debugging an HTML client:

image

As mentioned earlier, there can be exceptions that are thrown that are properly handled by LightSwitch and irrelevant to your actual issue.  You’ll want to work with someone on the LightSwitch development team via the forums or e-mail to determine which exception is the one that is relevant.  If you know the exception is not relevant, you can continue execution by clicking the “Continue” button.

Collect Exception Information

Once you've found the exception, click the "Copy exception detail to the clipboard" link in the exception window if it's a .NET exception and paste the result into your favorite text editor.  If it's a JavaScript exception, there won't be a link to do this, so just skip that step.  Now, you'll still need to collect a little more information, since that exception detail won't include a detailed stack trace.  Click the "OK" or "Break" button in the exception window to dismiss it.  Open the Call Stack window in Visual Studio: Debug –> Windows –> Call Stack.  Select all of the lines in it (Ctrl+A) and copy them to the clipboard (Ctrl+C).

image

Paste that text along with the other exception detail you collected into the text editor.  You’ve now collected enough information to pass along to the LightSwitch development team.  Of course, along with the exception information, you should still describe the set of steps that reproduce the issue.

HTML Clients

If you're debugging a JavaScript exception, the stack trace will probably have a lot of single letters for function names.  This happens because your app is configured to use the minified versions of the runtime JavaScript files.  To provide a more informative stack trace, change the default.htm file of your HTML client and remove ".min" from all of the filenames of the referenced scripts.

image

Then follow the steps to reproduce the issue again.  This time it will provide the friendly function names in the stack trace which is much more useful.  Be sure to revert your changes to the default.htm file when you’re done debugging.


Raghuveer Gopalakrishnan described Upgrading your LightSwitch projects in an 8/12/2013 post to the LightSwitch Team blog:

Introduction

image_thumb1211_thumbIn this blog post, we will look at how to upgrade your existing LightSwitch applications to Visual Studio 2013 Preview. In recent blog posts, we introduced some of the cool features available in the preview, like enhancements in Solution Explorer, partitioned model files that improve the experience of team collaboration on LightSwitch projects, intrinsic database management with database projects, etc. You can take advantage of all these features by migrating your existing LightSwitch projects forward by means of a project upgrade.

Upgrade Experience

For projects created with Visual Studio 2012, you will automatically receive a prompt for upgrade on opening the project on a machine with Visual Studio 2013 preview.

image

Click OK to begin the Project Upgrade process. Once the upgrade process is completed, you will automatically get the migration report in the browser as well as upgrade logs in the same folder on disk.

image

Upgrade Details

During upgrade, the root, client and server projects undergo a series of migration steps to move your project to the latest version. Below is a list of changes that occur during the upgrade process:

image

If your project is under source code control, then after upgrade you will find the partitioned model files included under pending changes along with the project files. These files are checked in with your next check-in.

Extensions and Upgrade:

If the LightSwitch project references any custom/third party extensions, these extensions are not automatically upgraded when project upgrade occurs. Many of the popular LightSwitch extensions now support Visual Studio 2013 Preview and 3rd party developers are continuing to add support for more.

Notes:

1. Before the upgrade starts, a backup of the current project is made and stored in a Backup folder located at the same level in the project folder as the solution file (.sln). The user settings file (.suo) is also backed up.

2. It is a one-way project upgrade. This means that after upgrade, the project cannot be opened with previous releases of Visual Studio.

If you change your mind after upgrade and want to move back to the previous version of the project, you can open the original (non-upgraded) project in the Backup folder using Visual Studio 2012.

Known Issues:

Please refer to the Release Notes for Visual Studio 2013 Preview for a list of known issues around project upgrade.

Conclusion:

By upgrading your project to the latest preview version, you can utilize all the cool features and enhancements that were announced during the preview release. We would love to hear about your experience in upgrading your existing projects to the latest version of Visual Studio. Please let us know your feedback via LightSwitch forum or by posting a comment below.


<Return to section navigation list>

Cloud Security, Compliance and Governance

Richard Santalesa reported Ponemon’s Cyber Insurance Study Finds Companies Neglecting Coverage in an 8/14/2013 article for the @InfoLawGroup blog:

imageThe challenges of managing corporate risk – whether through the growth of formal "GRC" (governance, risk management and compliance) programs or through contractual liability transfers – increase each year. However, a recent Ponemon Institute study, Managing Cyber Security as a Business Risk: Cyber Insurance in the Digital Age, released Aug. 7, 2013 (available here: http://www.experian.com/managingcybersecurity) (the "Study"), reveals that companies have neglected sourcing cyber security insurance, even though they rank cyber security risks as an equal or worse financial threat than natural disasters and other major traditional business risks.

imageAccording to the Study, only 31 percent of the risk management professionals at the companies surveyed report having "cyber risk" insurance coverage in place today. This is despite the fact that (as detailed in a different Ponemon study, the 2013 Cost of Data Breach Study) the average cost per lost or stolen data record was $188 in 2012 and the average financial impact per security incident totaled $9.4 million – a potentially crippling or fatal sum for small to medium-sized businesses, and one that can, obviously, vary greatly depending on the amount of data affected, the sensitivity of the data content and the effectiveness of response handling.

image_thumbSome of the Study’s findings, which include notable positive trends, are:

  • Overall concerns about cyber risks and the financial and other impacts have spread beyond corporate IT. Thankfully.
  • Among study respondents without cyber insurance, 57% indicated an intent to obtain coverage in the future, while 70% (not surprisingly) became interested in investigating cyber insurance after experiencing a data security incident.
  • Premium costs, range of exclusions, restrictions and defined uninsurable risks were the top reasons for not purchasing cyber security insurance (although 62% of those who have obtained coverage believed premiums were “fair” given the nature of the risks involved).
  • A majority of companies believe that their “security posture” overall is strengthened after obtaining cyber risk insurance, in part due to the assessments and other required steps underwriters require as part of policy issuance.
  • A large number of respondents rated insurer responsiveness to data incident claims as either very good or excellent.
  • Primary purchasing evaluation and decision making in selecting and obtaining cyber risk policies is typically handled, according to the Study, by risk management teams, compliance leaders or the CSO/CISO – with secondary input from general counsels, CFOs and other C-Level or business unit executives.
  • General agreement that cyber risk policies typically cover the “most common and costly incidents”, which the study detailed as including human error, negligence, external attacks, system/BP failures and insider acts and omissions. Notably, however, only 11% of respondents stated their coverage protects against “attacks against business partners, vendors or other third parties that have access to the company’s information assets” – a crucial issue to consider in drafting and negotiating any IT-related services agreement.
  • Significantly, the majority of policies held by respondents now cover notification costs to data breach victims, legal defense costs, and forensics and investigative costs. 46% reported their policy also includes coverage for regulatory penalties and fines. Much less common was coverage for brand/reputational damage-control costs, employee productivity losses, third-party liability or revenue losses.

Although perceived as a "niche" product a few years ago, cyber risk insurance is clearly coming into its own as one important arrow in the risk management quiver. The Ponemon Study reveals interesting developments and trends in the cyber risk market and should provide companies still on the cyber risk insurance fence with thought-provoking information to consider.

To discuss the Study, cyber risk insurance or risk management programs feel free to contact me or any of the attorneys here at the InfoLaw Group, LLP.


<Return to section navigation list>

Cloud Computing Events

The Swedish Azure Group (SWAG) will present the Cloudburst 2013 Developers Conference in English on 9/19 and 9/20 at the Microsoft offices in Akalla, a Stockholm suburb:

image


Alan Smith, Magnus Mårtensson and Jesper Zachrisson will hold a "Successful cloud solutions in practice" seminar at Active Solution, Stockholm on 9/28/2013 from 11:30 AM to 12:45 PM:

imageJesper Zachrisson, Magnus Mårtensson and I will be presenting a seminar at Active Solution on Wednesday 28th September, 11:30 – 12:45.

“Active Solution has invested heavily in the cloud in general and Windows Azure in particular. Various kinds of general-purpose cloud services, such as email, are now a common element at most companies. Active Solution uses the cloud primarily for solutions that are unique to a particular company. Here, adoption has been slower, but at the same time there is often more to gain.

In the past year alone we have worked on some 15 engagements involving customer-specific solutions. We see the cloud being used ever more frequently among ‘IT-heavy’ companies, but many ‘ordinary’ companies and organizations still have a long way to go. The excuses or explanations offered are not always sound. We think that is wrong, and with this seminar we want to show how others are doing it.”

The event is free to attend. If you would like to attend you can register here.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

‡ Dave Barth, a Google Product Manager, reported Google Cloud Storage now provides server-side encryption in an 8/15/2013 post to the UK Google Cloud Platform blog:

This post has been updated to more accurately reflect details of the encryption process.

imageWe know that security is important to you and your customers. Our goal is to make securing your data as painless as possible. To help, Google Cloud Storage now automatically encrypts all data before it is written to disk, at no additional charge. There is no setup or configuration required, no need to modify the way you access the service and no visible performance impact. The data is automatically and transparently decrypted when read by an authorized user.

If you require encryption for your data, this functionality frees you from the hassle and risk of managing your own encryption and decryption keys. We manage the cryptographic keys on your behalf using the same hardened key management systems that Google uses for our own encrypted data, including strict key access controls and auditing. Each Cloud Storage object’s data and metadata are encrypted under the 128-bit Advanced Encryption Standard (AES-128), and each encryption key is itself encrypted with a regularly rotated set of master keys. Of course, if you prefer to manage your own keys then you can still encrypt data yourself prior to writing it to Cloud Storage.
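
If you do take the manage-your-own-keys route, the client-side work amounts to encrypting the object's bytes before the upload and decrypting them after download. Here is a minimal PHP sketch of that approach (PHP to match the other code samples in this post); the file names are hypothetical, the key handling is deliberately simplified, and nothing here is part of Google's API – you are responsible for storing the key securely:

<?php
// Minimal sketch: client-side AES-128-CBC encryption before writing an
// object to Cloud Storage. File names and key storage are illustrative
// assumptions; only the openssl calls are standard PHP.

$plaintext = file_get_contents('report.csv');      // hypothetical data to protect
$key = openssl_random_pseudo_bytes(16);            // 128-bit key; persist it securely yourself
$iv  = openssl_random_pseudo_bytes(16);            // fresh IV for every object

// Prepend the IV to the ciphertext so it can be recovered later (the IV is not secret)
$ciphertext = $iv . openssl_encrypt($plaintext, 'aes-128-cbc', $key, OPENSSL_RAW_DATA, $iv);
file_put_contents('report.csv.enc', $ciphertext);  // upload this file instead of the original

// Later, after downloading the object back:
$blob = file_get_contents('report.csv.enc');
$recovered = openssl_decrypt(substr($blob, 16), 'aes-128-cbc', $key,
                             OPENSSL_RAW_DATA, substr($blob, 0, 16));
?>

The trade-off is exactly the one the post describes: client-side encryption keeps the plaintext out of Google's hands entirely, at the cost of taking on the key management burden the server-side feature otherwise handles for you.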

Server-side encryption is already active for all new data written to Cloud Storage, whether for creating new objects or overwriting existing objects. Older objects will be migrated and encrypted in the coming months.

imageThis feature adds to the default encryption functionality already provided by Persistent Disks and Scratch Disks that come with Google Compute Engine. Together, this means that all data written to unstructured storage on the Google Cloud Platform is now encrypted automatically, with no additional effort required by developers. We’re happy to be taking this step in our commitment to evolve the security capabilities of our platform.


Jeff Barr (@jeffbarr) reported Push Notifications to Mobile Devices Using Amazon SNS in an 8/13/2013 post to his AWS evangelist blog:

imageDoes your mobile app keep on running in the cloud, even when the associated smartphone or tablet is closed? If so, you might want to proactively provide your customers with useful information. For example, a traffic app can warn of heavy traffic and a slow commute, allowing the user to arrive in time for their first meeting of the day.

image_thumb311_thumbPush notifications are short, alert-style messages you can send to users even when they are not actively using your app. The experience is similar to SMS, but it costs much less because it uses Wi-Fi or cellular data. Users can choose to acknowledge a push notification to launch your app and see more information.

Implementing push notifications can be tricky, especially when you target multiple platforms such as iOS, Android and Kindle Fire. Many customers do it by integrating directly with the push relay services that Amazon, Apple, and Google provide for their devices. These services each use different, platform-specific APIs, and you have to manage things like token updates or token invalidation by the services, along with token feedback when users upgrade their devices or delete your app. Also, the nature of mobile app distribution is such that successful apps can become popular almost overnight. Scaling quickly from zero to millions of devices, and tens of millions of daily notifications, can be challenging.

Customers tell us that this is just the sort of undifferentiated heavy lifting they like us to solve on their behalf. Today, we are enhancing the Amazon Simple Notification Service with Mobile Push, a new feature that transmits push notifications from backend server applications to mobile apps on Apple, Google and Kindle Fire devices using a simple, unified API. You can send a message to a particular device (direct addressing), or you can send a message to every device that is subscribed to a particular SNS topic (broadcast).

Best of all, you can start using this feature at no charge. The AWS Free Tier means all AWS customers can send one million push notifications per month across iOS, Android and Kindle platforms at no charge. After that, you pay $0.50 for every million publishes and $0.50 per million push deliveries.

How it Works
Here's what you need to do to create a mobile app that can receive push notifications:

  1. Create an app for a supported device and messaging API (Amazon Device Messaging, Apple Push Notification Service, or Google Cloud Messaging). The app must register with the local platform notification service using the device APIs in order to be able to receive notifications. For example, an iOS application would use the registerForRemoteNotificationTypes method. Although the specifics will vary from platform to platform, you will end up with some sort of token or identifier that is unique to the device. The code on the device will need to communicate this value to the server-side code. You could use an Amazon SQS queue or an SNS topic for this purpose.
  2. Create a server-side representation of the app using SNS's CreatePlatformApplication function (steps 2 through 4 are sketched in PHP just after this list).
  3. Register devices as your server code becomes aware of them by calling the SNS CreatePlatformEndpoint function. This function will return an ARN (Amazon Resource Name) that uniquely identifies the device.
  4. Send messages directly to a specific device by calling the Publish function with the device's ARN. You can easily scale this to handle millions of users by storing the endpoint ARNs in Amazon DynamoDB and using multi-threaded code on the server.
  5. Send messages to all devices subscribed to a topic by calling the same Publish function, but use the ARN of the topic. You can subscribe up to 10,000 devices to a single topic. For larger number of devices, use direct addressing as described in the previous step, or use multiple topics.
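
To make steps 2 through 4 concrete, here is a minimal sketch using the same AWS SDK for PHP that appears in the sample application below. The application name, GCM API key, and device token are placeholder assumptions; you would substitute the credentials from the Google APIs console and the token reported by the device in step 1:

<?php
// Load the AWS SDK for PHP
require __DIR__ . '/aws-sdk.phar';

$sns = Aws\Sns\SnsClient::factory(array(
    'key'    => '...',
    'secret' => '...',
    'region' => 'us-east-1'
));

// Step 2: create the server-side representation of the mobile app
$app = $sns->createPlatformApplication(array(
    'Name'       => 'MyPushApp',                         // hypothetical app name
    'Platform'   => 'GCM',                               // 'ADM' and 'APNS' also supported
    'Attributes' => array('PlatformCredential' => 'YOUR_GCM_API_KEY')
));

// Step 3: register a device token reported by the client app
$endpoint = $sns->createPlatformEndpoint(array(
    'PlatformApplicationArn' => $app['PlatformApplicationArn'],
    'Token'                  => 'DEVICE_REGISTRATION_ID' // obtained in step 1
));

// Step 4: publish a message directly to that one device
$sns->publish(array(
    'Message'   => 'Hello from SNS Mobile Push',
    'TargetArn' => $endpoint['EndpointArn']
));
?>

For step 5, the same publish call works unchanged; pass the topic's ARN as TopicArn instead of a device TargetArn to broadcast to every subscribed device.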

Mobile Messaging With PHP & Our Sample Application
In order to help you get started with this new feature as quickly as possible, we have put together a sample mobile push application. This application is provided in source code for all three of the supported platforms and messaging APIs.

I have an Android phone and used the AndroidMobilePushApp included in the ZIP file. Here's the most interesting part of the code:

public class ExternalReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        Log.i("ExternalReceiver", "onReceive");

        // Flatten the notification payload delivered in the intent extras
        // into a newline-separated list of key=value pairs
        Bundle extras = intent.getExtras();
        StringBuilder payload = new StringBuilder();
        for (String key : extras.keySet()) {
            payload.append(String.format("%s=%s", key, extras.getString(key)) + '\n');
        }

        // Forward the payload to the app's main activity for display
        Intent newIntent = new Intent();
        newIntent.setClass(context, AndroidMobilePushApp.class);
        newIntent.putExtra(context.getString(R.string.msg_field), payload.toString());
        newIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_SINGLE_TOP);
        context.startActivity(newIntent);
    }
}

I used the newest version of the AWS SDK for PHP to write a simple application. My code lists all of the applications, all of the endpoints of the first application in the list, and then sends the message "Hello from PHP" to all of the endpoints. Here's all it takes to do this:

#!/usr/bin/env php
<?php

// Load the AWS SDK for PHP
require __DIR__ . '/aws-sdk.phar';

// Create a new Amazon SNS client
$sns = Aws\Sns\SnsClient::factory(array(
    'key'    => '...',
    'secret' => '...',
    'region' => 'us-east-1'
));

// Get and display the platform applications
print("List All Platform Applications:\n");
$Model1 = $sns->listPlatformApplications();
foreach ($Model1['PlatformApplications'] as $App)
{
  print($App['PlatformApplicationArn'] . "\n");
}
print("\n");

// Get the Arn of the first application
$AppArn = $Model1['PlatformApplications'][0]['PlatformApplicationArn'];

// Get the application's endpoints
$Model2 = $sns->listEndpointsByPlatformApplication(array('PlatformApplicationArn' => $AppArn));

// Display all of the endpoints for the first application
print("List All Endpoints for First App:\n");
foreach ($Model2['Endpoints'] as $Endpoint)
{
  $EndpointArn = $Endpoint['EndpointArn'];
  print($EndpointArn . "\n");
}
print("\n");

// Send a message to each endpoint
print("Send Message to all Endpoints:\n");
foreach ($Model2['Endpoints'] as $Endpoint)
{
  $EndpointArn = $Endpoint['EndpointArn'];

  try
  {
    $sns->publish(array('Message' => 'Hello from PHP',
			'TargetArn' => $EndpointArn));

    print($EndpointArn . " - Succeeded!\n");
  }
  catch (Exception $e)
  {
    print($EndpointArn . " - Failed: " . $e->getMessage() . "!\n");
  }
}

?>

Here's what shows up on the phone after I ran this code a couple of times:

And here is what I saw on the console:

List All Platform Applications:
arn:aws:sns:us-east-1:348414629041:app/GCM/Amazon_Mobile_Push

List All Endpoints for First App:
arn:aws:sns:us-east-1:348414629041:endpoint/GCM/Amazon_Mobile_Push/dc8a5ae9-2f21-33a2-a8cd-7fafba642bf4

Send Message to all Endpoints:
arn:aws:sns:us-east-1:348414629041:endpoint/GCM/Amazon_Mobile_Push/dc8a5ae9-2f21-33a2-a8cd-7fafba642bf4 - Succeeded!

Mobile Messaging From the Console
You can manage the entire process of creating an app and registering devices from the SNS tab of the AWS Management Console. Here's a walkthrough.

Start by clicking the Add a New App button:

Enter the application's name, choose a push platform, and enter and the credentials (in this case an API key for Google Cloud Messaging for Android) for the platform:

Optionally, configure a set of SNS topics for notification of significant events:

Then, register the endpoints using the tokens supplied by each user device that registers for your notification service:

You can then send notifications to the endpoints:

And the message will appear on the device (this is a screenshot of the sample app):

Go For It
The new Mobile Push section of the Amazon SNS Developer Guide will help you to get started. You may also want to sign up for our August 29 Webinar ("New Mobile Push Notifications from Amazon SNS").

As always, this functionality is available now and you can start using it today in all public AWS Regions. I am really looking forward to hearing from you after you have migrated your existing application or built something new!


Barbara Darrow (@gigabarb) summarized her Told ya so: Amazon gears up more services for mobile developers post of 8/13/2013 as follows: “Updated: New AWS Simple Notification Service lets devs automate push notifications to iOS, Google and — oh yes — Amazon devices using one API:”

imageIf anyone doubted that Amazon Web Services is fully aboard the mobile applications bandwagon, check out its new Simple Notification Service support for Apple, Google and its own Kindle Fire devices. The company promises that Mobile Push for Amazon Simple Notification Service, or SNS, means developers can send notifications to all those platforms using one API.

This is a development that UrbanAirship, a vendor specializing in automating push alerts, will likely be following.

According to Amazon:

image_thumb311_thumb“SNS Mobile Push alleviates the need to build and operate one’s own intermediary service, and enables developers to push once, deliver anywhere. This reduces the cost and complexity for developers, as they do not have to integrate and maintain different versions of the same push software for multiple mobile platforms.”

The service is free for up to one million notifications per month and then $0.50 per each additional million messages published. It's been clear for months that AWS has its eye on mobile development, hiring up a storm for a new Palo Alto, Calif. mobile-focused group, for example.  And, seriously, why would it not, given that more people use smartphones and tablets as their primary devices?

In a blog post, Amazon CTO Werner Vogels said people want to be alerted about important news on their smart phones or tablets even if they have not opened their mobile apps.

” … baseball fans want to know as soon as their favorite team player hits a home run, so they can watch a video replay and catch the rest of the game. The rising proliferation of cheap and powerful sensors means not only apps but smart devices want to communicate important information. For example, your new car could warn you on your mobile phone when the door is not fully closed, so you can return to lock it properly.”

Amazon is hardly alone in this mobile craze. Microsoft is pushing Windows Azure as a development platform for mobile services, including push alerts. And there is a raft of mobile backend platforms and tools from smaller companies.

Some pundits, including GigaOM PRO analyst Janakiram MSV, predicted (subscription required) that Amazon will combine (or expose) the mobile services it already offers in its own Mobile Backend as a Service (MBaaS), where it would compete with such companies as Parse (now owned by Facebook), Kinvey, StackMob and Kii.

That’s looking like a pretty safe bet.

It’s a horse-race between AWS and Windows Azure to see who can add the most mobile services features the fastest.

Full disclosure: I’m a registered Gigaom analyst.


Werner Vogels (@werner) explained Making Mobile App Development Easier with Cross Platform Mobile Push in an 8/13/2013 post:

imageThis year as I hosted AWS Summits in 12 different cities around the world, I met thousands of developers who are building powerful new applications for smartphones, tablets and other connected devices, all running mobile cloud backends on AWS.

These developers want to engage their users with timely, dynamic content even when the users haven’t opened their mobile apps. For example, baseball fans want to know as soon as their favorite team player hits a home run, so they can watch a video replay and catch the rest of the game. The rising proliferation of cheap and powerful sensors means not only apps but smart devices want to communicate important information. For example, your new car could warn you on your mobile phone when the door is not fully closed, so you can return to lock it properly.

image_thumb311_thumbDevelopers address these use cases with push notifications, which are short messages pushed from a backend server to a specific application on an end user's mobile device. Push offers similar user experiences to SMS, but with enhanced functionality and at a fraction of the cost.

While we have made it easy to build great mobile apps with AWS that use on-demand, scalable and reliable building blocks like EC2, DynamoDB, SQS and many others, supporting push notifications at large scale remains incredibly complicated for our customers. Amazon, Apple, and Google each maintains a free relay service that delivers notifications via persistent connections to devices running the platforms they own. Supporting millions of users on multiple mobile platforms means integrating with each of these platform-specific relay services, thus introducing operational complexity and cost for our customers.

Customers tell us that virtually all use cases for push notifications require an intermediary application to manage security tokens, queue outgoing messages, and abstract platform-specific APIs. Developers have told us that they build and maintain their own intermediary relay applications, even though they find the process of operating these intermediary relay applications to be painful and error prone. Building these proxy or relay services to be reliable and scalable so that you can push millions of notifications a day is difficult and our customers want us to make it easier.

Announcing Amazon SNS with Mobile Push

Today, we are enhancing Amazon Simple Notification Service (SNS) with Mobile Push to meet this customer request and support cross platform, device agnostic push notifications to iOS, Android and Kindle mobile devices natively within AWS. SNS Mobile Push alleviates the need to build and operate one’s own intermediary service, and enables developers to push once, deliver anywhere. This reduces the cost and complexity for developers, as they do not have to integrate and maintain different versions of the same push software for multiple mobile platforms. Instead, SNS Mobile Push enables notifications to be delivered directly to everyone who wants to receive them – regardless of which mobile, desktop or connected device they happen to be using.

Developers tell us that managing push notifications at large scale distracts them from building great apps. In some cases, this work is complex enough that it actually limits what developers are willing to offer to their customers. For example, Crittercism tells us that delivering timely push notifications became so burdensome as they grew to touch 600 million devices that they chose to stop offering push notifications in the past. They are now able to offer push notifications to their customers again using Amazon SNS and can notify tens of millions of users in a matter of seconds about critical app performance issues.

We chose to enhance Amazon SNS instead of building a separate mobile notification service because Amazon SNS was designed from day 1 to support multiple protocols and delivery methods (Email, SMS, SQS, HTTP etc.) and already operates at a massive scale delivering billions of notifications every day over these delivery methods.

By leveraging the scale of AWS and the existing SNS technology, we are able to offer the same cost effective prices for Mobile Push that we offer for Amazon SNS. Customers can send their first million notifications per month for free and then pay only for what they use beyond that, at $1.00 per million push notifications ($0.50 per million publishes and $0.50 per million push deliveries). They can use Mobile Push to target unique messages to individual devices, or broadcast identical messages to multiple devices at once.

Customers tell us SNS Mobile Push offers lower costs and operational burden, in addition to powerful scale and speed. For instance, Earth Networks used to build and manage its own push infrastructure but has now migrated to SNS Mobile Push because it is less expensive than the self-managed service they previously operated.

To get started right away for free with Amazon SNS Mobile Push, visit http://aws.amazon.com/sns. For more information, please see the Amazon SNS documentation, including a getting started guide and reference apps for each mobile platform.


<Return to section navigation list>
