Sunday, December 30, 2012

SharePoint 2013 Licensing Overview

For people who have been around SharePoint for a long time, one of the dreaded areas of conversation with upper management has been SharePoint licensing. I can say that IT managers will have a few pleasant surprises in store when they try to get their heads around the SharePoint 2013 licensing model.

For starters, here are a few key changes in the SharePoint 2013 licensing model that will bring smiles to IT managers' faces:
   - No need to buy SharePoint Internet licenses: You heard it right. The license to host SharePoint for anonymous users and public-facing websites, which used to cost 40-50k per server, is not needed anymore. So, basically, if you have the core Server license you can host a public-facing website without paying anything extra.

  - No need to buy CALs for external users: One of the big points of confusion with extranet scenarios has been whether you need to buy separate CALs for external users such as vendors, contractors and suppliers, who are not typically your employees but may need to access a little bit of information or a few forms on your SharePoint extranet. Here is the exact wording defining who external users are, from one of the decks I found from Microsoft on this topic:
       External users means users that are not either your or your affiliates’ employees, or your or your affiliates’ onsite contractors or onsite agents.

  - Search consolidation into the core SharePoint license: One of the big changes is with respect to search capabilities. All the search capabilities have been merged into the core SharePoint Server licenses (Enterprise and Standard), and you don't need to buy a separate FAST license. This, I believe, has also helped in streamlining the end user's experience with respect to search and data discovery.

I wanted to cover these three main changes with respect to SharePoint 2013 on-premise licensing in this post, but there are other changes being introduced with respect to features and SKUs in SharePoint 2013 on-premise as well as in O365 licensing.

Here's an image of the key components in SharePoint 2013 on-premise as well as O365, followed by another diagram illustrating the key changes from 2010.


Figure: SharePoint 2013 Licensing Options

      Please note the absence of other add-on licenses such as Internet Site, Search Server, FAST Server, etc.                                        


Figure: SharePoint 2013 Licensing - Key Changes
My goal for this post is to provide an overview of the key changes that I see in the SharePoint licensing model, and to help people who are trying to get a high-level overview of the same. It's purely based on my personal understanding and does not reflect Microsoft's intended vision or my employer's view. As always, any decision with respect to licensing should be validated with your Microsoft representative, and is heavily dependent upon your unique requirements. You are responsible for any decision that you make based on the above information.

Let me know if you have any feedback and/or find a flaw with my understanding.
Cheers!!

Monday, December 03, 2012

SharePoint 2010: Classic Mode Vs Claims Based Authentication

One of the settings that you need to pick when creating a web application in SharePoint 2010 is the authentication type. The two options that you have are:
   1. Claims Based Authentication
   2. Classic Mode Authentication

Authentication in SP 2010 - Classic Mode Vs Claims Based

Classic Mode: This is no different from the traditional AD-based authentication. One constraint with classic mode is that you cannot implement forms-based authentication later on if you want to.
You can convert a "classic mode" web application to "claims based", but you will have to use PowerShell; there is no UI available in Central Admin to do it.
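Since the conversion has to be done from the shell, here is a minimal sketch of the SharePoint 2010 Management Shell commands involved (the URL is a placeholder for your own web application; validate against Microsoft's official migration guidance before running this anywhere that matters):

```powershell
# Assumption: run from the SharePoint 2010 Management Shell as a farm admin.
$wa = Get-SPWebApplication "http://yourwebapp"   # placeholder URL
$wa.UseClaimsAuthentication = $true              # switch the web app to claims mode
$wa.Update()
$wa.MigrateUsers($true)                          # migrate existing accounts to claims identities
```

Keep in mind that this conversion is one-way; converting a claims-based web application back to classic mode is not supported.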

Claims Based: Claims-based authentication gives you the option to authenticate users using AD as well as forms-based authentication for the same web application. It's based on Windows Identity Foundation, and can enable several advanced authentication scenarios, as described in this article:
http://msdn.microsoft.com/en-us/library/hh394901(v=office.14).aspx

Claims-based authentication would be the preferred approach for most users. Classic mode may be selected if mandated by corporate policy or for backward compatibility. Microsoft is also showing more commitment towards broader adoption of claims-based authentication across various product lines (Azure, CRM, etc.), making it the better choice for any new development.


 Update (November 2013): I was looking at SharePoint 2013 (Preview version) and it seems there is no option to select classic mode authentication when creating a new web application. Although this could change by the product release, it is definitely an indication of the direction Microsoft is going, which is to encourage claims-based authentication.
 

Friday, September 07, 2012

Windows RT Vs WinRT vs Windows 8 !!

One of the things that I was confused about in the last few hours, and I am confident that many more people are going to be confused about in the next few months, is the difference between "Windows RT" and WinRT. Here is the simple version:

Windows RT:
     1. It's an OS.
     2. It's a variation of the Windows 8 OS that Microsoft has specifically designed for devices running ARM processors.
     3. (In case you are wondering) ARM is an architecture used by many processor companies to design their chips, including Qualcomm, nVidia, Texas Instruments and several others.

What that means is that when you go to the Settings -> PC Info screen of a tablet running an ARM processor, it will show you "Windows RT" and NOT "Windows 8". So, it's a full-fledged OS that is branded and sold separately by Microsoft to tablet OEMs (aka manufacturers) who are using ARM processors in their tablet devices. In fact, it used to be called "Windows on ARM" but was later re-branded as Windows RT.

WinRT:
     1. It's a runtime.
     2. Conceptually, it's not very different from .NET, Java or any other runtime, in the sense that its main goal is to create a cross-platform application architecture on Windows 8 that supports multiple languages (C++, C#, JavaScript, etc.).

Difference between "Windows RT" and "Windows 8":
     Now that we know that Windows RT is an OS, I am sure that some of you are wondering how it is different from Windows 8. Here it is!!
     1. Not much different from user experience point of view as both support Metro UI.
     2. Windows RT is not sold directly to consumers, and is meant only to be sold to device manufacturers (aka OEMs).
     3. The goal behind Windows RT is to give end users a consistent experience across tablets offered by various manufacturers (including Microsoft's own device, called Surface).
     4. Windows RT will come pre-packaged with MS Office, whereas Windows 8 users will have to buy (and install) Office separately.
     5. There are similar differences in terms of the applications shipped out of the box in Windows RT and Windows 8, as well as the kinds of applications you can develop/install/uninstall on them. Windows RT seems to be the more locked-down of the two.
     6. You also cannot use Win32 and COM APIs on Windows RT, so you are pretty much restricted to using WinRT APIs. That said, over the next few days I am going to be working closely on a project that requires access to underlying system information on a Windows RT "ARM" device. It seems that there may be a way to access a subset of the Win32 API on a Windows RT device. If it works, I will share my experience in a subsequent post.


Cheers!! I hope it helped!

 

Friday, December 02, 2011

Metro UI and Windows 8!!

I spent a couple of hours this week trying to understand what Metro UI is. While Microsoft is definitely betting big on Metro UI, with it being incorporated in Windows Phone 7, Zune and most recently in Windows 8, there doesn't seem to be much information available on what it is, how a developer can benefit from it, and what set of tools, technologies and best practices are available for a developer to use it in real-life applications.

Here's my attempt to provide some clarity on the topic:

- Metro UI is a set of UI best practices:
     What this means is that Microsoft is expecting all future development on Windows platform to follow these best practices when it comes to implementing UI.

- Inspiration:
     It's widely accepted that Metro UI is partly inspired by signs commonly found at public transport systems, for instance on the King County Metro Transit, a public transit system that serves the Seattle area.

- History:
     While there are signs of a similar UI paradigm being tried and tested by Microsoft on several products in the past, such as Windows Media Center, Xbox and Microsoft Live, it was Windows Phone 7 that is credited with bringing Metro UI front and center in the Microsoft UI strategy.

- How to learn it:
    The overall concept behind Metro UI is technology agnostic; that is, you can decide to implement Metro UI for your application in any UI technology that you are familiar with (ASP.NET, AJAX, Silverlight, etc.). The reality is that the majority of Metro UI development has come from Windows Phone 7 (now Mango), which is done in Silverlight. Although there is very little clarity at this point in time on the "Silverlight vs HTML5" issue, I think most of the new development will happen in HTML5, targeting Windows 8 as well as other form factors such as tablet and mobile.

   One of the best ways to learn Metro UI would be to build a Windows Phone application (using Silverlight) or a Windows 8 application (using HTML5). Following are a few links that can help with the ramp-up:

Windows 8 Dev Preview: http://msdn.microsoft.com/en-us/windows/home/br229518

Metro Design Principles and Tutorials: http://www.microsoft.com/design/toolbox/tutorials/windows-phone-7/metro/

HTML 5 Tutorial: http://www.mywindowsclub.com/resources/5011-HTML-Tutorials.aspx

Windows 8 New Features: http://www.mywindowsclub.com/resources/4653-Windows-new-features.aspx
       
Future of Silverlight: http://www.mywindowsclub.com/resources/4733-Windows-embracing-HTML-what-future.aspx 

Cheers,
Piyush

Wednesday, October 05, 2011

Settled in Houston, TX + How to Shut down Windows 8?

The last few weeks have been very hectic. I moved to Houston from Redmond, and we/iLink opened a new office here. Finally, the dust is starting to settle and I cannot wait to dive deeper into Crescent, Denali and Windows 8.

Most of the Windows 8 functionality from the developer preview is working on my laptop (Lenovo X61 ThinkPad). Windows 8 is looking nice. Microsoft has changed many common functions. For example, it took me a couple of minutes to figure out how to shut down the laptop. For people out there who have just installed Windows 8 and are struggling with the same problem, here is how you do it:

1. Move your mouse to the bottom-left corner of the screen
2. Wait until the “Charms” menu appears with the Search, Share, Devices and Settings options
3. Click on Settings, and a bar appears on the right side of the screen.
4. Select Power option
5. Select Shut Down


Let me know if you are able to find a better way or shortcut to get to the power menu.

I am also not able to get touch working on this laptop. Since there is no touch driver available for Windows 8 on the Lenovo site (understandably), I have tried installing the Windows Vista and Windows 7 versions, but to no avail. Let me know if any of you have run into similar issues and have any suggestions.

Feels good to be back.

Cheers!!

Friday, July 01, 2011

iLink wins the Microsoft Mobility Partner of the Year Award !!

I am glad to announce that iLink Systems has won the 2011 Microsoft Mobility Partner of the Year Award. I am proud of the work that iLink's mobility team did for various customers in B2B and B2C space. Stay tuned for the official press release.

Here is the url for the award page: http://digitalwpc.com/Awards/2011-Awards?awardId=6&categoryId=104#fbid=99pyETUwxHe

I am also proud that Invensys - one of the customers I work very closely with - also won the Alliance ISV Industry Partner of the Year Award.
http://digitalwpc.com/Awards/2011-Awards?awardId=9&categoryId=119#fbid=99pyETUwxHe 

Many of the Windows Phone 7 projects that we did had an Azure component to them. Cloud and mobility complement each other very well. A very common design pattern that is emerging is to implement a mobile application on Windows Phone 7 that leverages Windows Azure-based compute and messaging and SQL Azure-based storage services to provide an unprecedented level of pervasive availability and scalability.

Check out the Windows Azure Toolkit for Windows Phone 7 at codeplex : http://watoolkitwp7.codeplex.com/  

Also be on the lookout for the Windows Phone 7 Mango release. I am hearing great things about it.
I am also looking forward to WPC to try my hands on (hopefully) Mango and Crescent. Ping me if you are attending WPC!

Cheers!!

Wednesday, May 25, 2011

Windows Azure Traffic Manager - Load Balancer In the Cloud !!

It's pretty well known that Azure essentially is a set of VMs managed by a very sophisticated load-balancing layer (which also manages VM lifecycle and other management/instrumentation functions). But until now, there was no way for users/developers to balance the load on the various instances and services that you have subscribed to in a streamlined way.

One workaround was to use the RoleEnvironment.StatusCheck event to get the latest state of your service, and call SetBusy to take a particular instance out of the load balancer queue for 10 seconds. Similarly, there were a few other workarounds that required you to keep track of what your VMs were doing, and accordingly re-direct the traffic to prevent any one instance from being bogged down and becoming unresponsive. Not any more!!

With the launch of the Windows Azure Traffic Manager, Microsoft has introduced a new feature that allows customers to load balance traffic to multiple hosted services (even across data centers and regions). Developers can choose from three load balancing methods: Performance, Failover, or Round Robin. Traffic Manager will monitor each collection of hosted services on any HTTP or HTTPS port. If it detects that a service is offline, Traffic Manager will send traffic to the next best available service.
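To make the Failover and Round Robin methods above concrete, here is a toy Python sketch of how such a policy might pick an endpoint. The endpoint names and health flags are made up for illustration; the real Traffic Manager does the health monitoring against your hosted services for you, and the Performance method (which needs latency data) is omitted here:

```python
from itertools import cycle

class TrafficPolicy:
    def __init__(self, endpoints):
        # endpoints: list of (name, healthy) pairs, in priority order
        self.endpoints = endpoints
        self._rr = cycle(endpoints)

    def failover(self):
        # Failover: send traffic to the first healthy endpoint in priority order.
        for name, healthy in self.endpoints:
            if healthy:
                return name
        return None

    def round_robin(self):
        # Round Robin: rotate through endpoints, skipping unhealthy ones.
        for _ in range(len(self.endpoints)):
            name, healthy = next(self._rr)
            if healthy:
                return name
        return None

# "us-north" is down, so both methods route around it.
policy = TrafficPolicy([("us-north", False), ("europe-west", True), ("asia-east", True)])
print(policy.failover())      # first healthy endpoint in priority order
print(policy.round_robin())   # alternates between the healthy endpoints
```

The key behavior both methods share is the one described above: an endpoint detected as offline simply stops receiving traffic, and the next best available one takes over.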

This feature is offered under the broader umbrella of the "Windows Azure Virtual Network" feature, along with the "Azure Connect" service. It is currently in CTP, and you can request access from the Windows Azure site.
MSDN has also published a hands-on lab that works really well, if you want to try this feature out.

Cheers!!

Wednesday, April 20, 2011

High-availability and Disaster Recovery (HADR) in SQL Server “Denali” Release

There is a lot of exciting information trickling out of Microsoft on the upcoming "Denali" release. One of the most promising enhancements I came across today is in the area of high availability and disaster recovery - HADR.

The core concept behind this feature is something called "availability groups". Availability groups essentially are a set of failover partners (DBs), known as availability replicas. Each availability replica possesses a local copy of each of the databases in the availability group. One of these replicas, known as the primary replica, maintains the primary copy of each database. The primary replica makes these databases, known as primary databases, available to users for read-write access. For each primary database, another availability replica, known as a secondary replica, maintains a failover copy of the database, known as a secondary database.
 
The following image helps explain the concept:
SQL Denali Availability Group

Here's a really good step-by-step guide to configuring HADR in Denali using availability groups: SQL Server Denali - AlwaysON (HADR): Step-by-Step setup guide

I wonder if a SQL Azure instance is a supported secondary replica. Will post the result as soon as I get some time to try it out.

Cheers!!

Thursday, April 14, 2011

SharePoint and BI : Better Together !!

I feel very motivated by the response to this morning's webinar that we did on the BI capabilities of SharePoint. My intuition is starting to turn into a belief that data analysis and management is going to gain significant momentum for two key reasons:
    1. Management pressure: Upper and mid management will continue to be under pressure to show value by cutting costs and increasing efficiency. This in turn will put pressure on IT to evolve their BI strategy to include self-service BI so as not to get bogged down by the increase in demand. Self-service BI will be their savior, as they will not be able to scale up by hiring more people because of continued cost pressure.

   2. To make sense of the information explosion: Collaboration features in tools like SharePoint are resulting in an unprecedented level of information explosion. Organizations will be hard pressed to make sense of all that information for multiple reasons, including:
                a. Legal Liability
                b. Getting Pulse of Employees Sentiments
                c. Knowledge Sharing
                d. Data Quality and Accuracy
                e. Many more.

  I believe SharePoint is uniquely placed to capitalize on this opportunity. I cannot think of a better delivery engine for all these data analysis and mining capabilities. I also cannot wait for Denali and Crescent to come to market with the next level of coolness.

Here are a few slides that I came up with for today's event that map the various self-service states that organizations can choose to be in (depending upon their unique requirements), and the components from the Microsoft BI stack that can enable those requirements. I have included only the delivery components in the stack, and therefore SSIS, SSAS and other underlying components are missing.
 
** If you need a non-watermarked version of these images for non-malicious purposes, just drop me a note and I will be glad to send a copy.


The BI Continuum

Capabilities Mapped to target BI Strategy options

Technology Enablers Mapped to Target BI States 

   I am tempted to add the cloud/Azure dimension to the Microsoft BI story, but that's probably for another post.
Cheers!! 

Thursday, April 07, 2011

Denali, Crescent - BISM and (future of) UDM !!



I got an early preview of Denali and Crescent a couple of weeks ago. There are many cool features in the pipeline that I cannot yet talk about (NDA reasons), but be on the lookout for official communication trickling down from Microsoft as they near the CTP/release date.


I want to focus on BISM in this post, as many people are wondering about it (especially those who earn their bread, butter and beer by designing, creating and maintaining UDM models). BISM stands for "Business Intelligence Semantic Model" and is a new data modelling language. Therefore, it obviously raises concerns in people's minds about the future of UDM.


 I found this great post by T. K. Anand, who is a Principal Group Program Manager in the SSAS team. He has done a good job of clearing up the confusion about BISM and UDM, and how they complement each other, in his post titled Analysis Services – Roadmap for SQL Server "Denali" and Beyond.

I have bulleted out a few key takeaways from the post:
    1. Crescent is going to be Microsoft's flagship offering in the area of ad hoc reporting and data visualization.

    2. BISM (Business Intelligence Semantic Model) will become part of Analysis Services and provide an alternative to UDM (OLAP) to power Crescent as well as other Microsoft BI front-end experiences such as Excel, Reporting Services and SharePoint Insights.

    3. The BI Semantic Model is a relational (tables and relationships) model with BI artifacts such as hierarchies and KPIs. It unifies the capabilities of SMDL models with many of the sophisticated BI semantics from the UDM. However, it does not replace the UDM.

    4. BISM and UDM are going to co-exist. BISM will incrementally enable more and more self-service BI scenarios, where end users are able to do simple data analysis using PowerPivot, SharePoint, etc. On the other hand, UDM will continue to be the tool of choice for BI professionals on IT-managed, enterprise-scale BI projects.

    5. UDM is a mature and industry-leading technology and is here to stay. UDM (OLAP) models are not being deprecated.

    6. For BI applications that need the power of the multidimensional model and sophisticated calculation capabilities such as scoped assignments, the UDM is the way to go. For the broad majority of BI applications that don't need that level of complexity, the BISM will offer a rich, high-performance platform with faster time to solution.

    7. Last but not least, Crescent's current UI is all Silverlight. It will be interesting to see how the integration with Excel, SharePoint and other end-user tools plays out.

Here is the new model diagram:

BI Semantic Model in Crescent
 In reality, Crescent fills a much-needed gap that Microsoft had in the self-service BI space. Crescent is just taking forward the story that was started by Microsoft's PowerPivot. It will be interesting to see how it plays out against the existing players in this space, ranging from old timers like SAP's BusinessObjects Explorer, IBM's Cognos Express and Information Builders' WebFocus Visual Discovery, to newer companies like MicroStrategy, Tableau, Target and QlikTech.

I am just starting to play with some of the self-service capabilities in the Microsoft stack, and look forward to posting what I learn in subsequent posts.
Cheers!!

Saturday, March 26, 2011

Convergence of "Server AppFabric" and "Azure AppFabric"

It's good to see the prediction that I made in December 2010 coming true :). In November 2010, I wrote a post to show the differences between "Azure AppFabric" and "Server AppFabric". The intent was to help people who were confused because of the similarity in the names of these two very different products.
   Refer: http://piyushprakash.blogspot.com/2010/11/de-mystifying-windows-azure-app-fabric.html

  I had concluded the post by saying that there was great potential if the capabilities from the Server AppFabric were merged with Azure. It's good to see that Microsoft is starting to offer some of the capabilities from the server version of AppFabric in the form of cloud-based services.

 For now, I will just show the difference in visual form, and look forward to diving into the nitty-gritty in subsequent blogs.

   Earlier:

  Now:
          
       
As you can see, the addition of services such as Caching, Composite Apps, etc. within Azure AppFabric indicates a move towards evolving it from being "a basic service bus with access control" to being a full-fledged, scalable and secure middleware with support for multiple integration patterns.

I hope to dive deeper into some of these features in future posts.

Sunday, March 20, 2011

Windows Azure CDN Simplified!!

Okay. I finally got around to trying out Windows Azure CDN. CDN could be a godsend for companies trying to serve a global user base. Currently, a lot of them essentially force their global users from around the world to connect to US-based data centers. The latency in delivering content becomes really high when the content happens to be video, audio or similar media that requires high bandwidth throughput.

  The Windows Azure Content Delivery Network (CDN) offers a global solution for delivering such high-bandwidth content that's hosted (cached) in strategically placed locations to provide maximum bandwidth for delivering content to end users.

In its current avatar, CDN is not suitable for many requirements (that could potentially benefit from such a service) because of inherent limitations in terms of security, supported content types, etc. But I definitely see those limitations going away in the next few releases.

Following are the key attributes of this service that one should understand to be able to decide if CDN is the right fit for a solution:

    1. CDN delivers cached content at strategically located physical nodes in the United States, Europe, Asia, Australia and South America. The latest list of nodes is available at the following URL: http://msdn.microsoft.com/en-us/library/gg680302.aspx

    2. Microsoft cannot (and doesn't) guarantee that the content will be served from the node closest to the user, because that is dependent upon many different factors such as current load, availability, node capacity, etc. This is very critical for organizations that have compliance requirements such as HIPAA.

   3. It works best for Windows Azure blobs and static content. It may not be a good idea to use CDN for content that is frequently refreshed or needs to be generated based on real-time data.

   4. CDN is an add-on feature to your Azure subscription (that needs to be enabled through the Azure Management Portal) and costs extra.

   5. Only blobs that are publicly available (anonymous access) can be cached with the Windows Azure CDN.

   6. If you use CDN to cache Azure hosted services, it's recommended to enable CDN only for static content. Enabling CDN for dynamically generated content or services will cost more, and may even have a negative impact on performance.

  7. You can set the "time-to-live" setting for content in the CDN. If content has expired in the CDN, the call is re-directed to the blob or storage service to get the latest content, which is then cached by the CDN for subsequent calls (until the time-to-live passes and the content expires again).
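The expiry behavior in point 7 can be sketched with a toy cache. All names here are illustrative; this just models the "serve the cached copy until the TTL passes, then refetch from the origin" flow:

```python
import time

class TtlCache:
    def __init__(self, ttl_seconds, fetch_from_origin):
        self.ttl = ttl_seconds
        self.fetch = fetch_from_origin
        self.store = {}  # key -> (content, cached_at)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry and now - entry[1] < self.ttl:
            return entry[0]                 # within TTL: serve the cached copy
        content = self.fetch(key)           # expired or missing: go back to origin
        self.store[key] = (content, now)    # cache it again for subsequent calls
        return content

origin_calls = []
def origin(key):
    # Stand-in for the blob/storage service that holds the authoritative content.
    origin_calls.append(key)
    return f"content-of-{key}"

cdn = TtlCache(ttl_seconds=60, fetch_from_origin=origin)
cdn.get("logo.png", now=0)    # miss: fetched from origin
cdn.get("logo.png", now=30)   # hit: served from cache
cdn.get("logo.png", now=90)   # TTL passed: fetched from origin again
print(len(origin_calls))      # 2
```

Notice the trade-off this implies: the longer the TTL, the fewer trips to origin, but the staler the content users may see; that is exactly why CDN is a poor fit for frequently refreshed content (point 3 above).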

Happy CDN'ing !!

Thursday, March 10, 2011

Windows Azure SDK 1.4 Release !

Microsoft released the Windows Azure SDK 1.4 yesterday.  The release fixes several significant bugs including the nasty RDP bug and adds capabilities like multiple administrator support from the enhanced Windows Azure Connect portal.

  The two new features that I am most excited about are "Azure CDN for Hosted Services" and "CDN content delivery over HTTPS".

Windows Azure CDN for Hosted Services: You can now use the Windows Azure Web and VM roles as the "origin" for objects to be delivered at scale via the Windows Azure CDN.

Serve secure content from the Windows Azure CDN: A new checkbox option in the Windows Azure management portal enables delivery of secure content via HTTPS through any existing Windows Azure CDN account.

Windows Azure 1.4 SDK is available for download here.

Cheers!!

Sunday, January 23, 2011

Local Development on SQL Azure

There seems to be a lot of confusion about how to develop locally against SQL Azure. Basically, if you are an existing .NET developer who is just starting to play with Azure, it's very common to look for a download that you can install in your dev environment that could simulate SQL Azure. Well, it doesn't exist!

Here are a few pointers:

1. SQL Azure is a DB hosted by Microsoft in the cloud. You cannot install it.
2. To use SQL Azure, you will need to subscribe to one of the offerings from Microsoft. Typically, the 3-month free introductory special is a good place to start while you decide what your exact needs are.
3. There is no special SDK/tool needed to connect to a SQL Azure DB. Existing data access frameworks like Entity Framework, LINQ to SQL, etc. will work with SQL Azure.
4. You will need SQL Server Management Studio 2008 R2 to be able to connect to SQL Azure and run queries. SSMS does not yet provide the advanced GUI for managing SQL Azure that it does for on-premise SQL installations. But I have seen the Denali preview, and it's going to add a whole new set of features in terms of what you can do with SQL Azure.
5. SQL Azure is NOT 100% on-premise MS SQL. For example, it supports only a subset of the T-SQL available in the on-premise version. Many admin features are also not available. One good way to ensure that your code will run on SQL Azure without many issues is to develop against the SQL Express edition.
6. One other good area to understand is the DB size limitation in SQL Azure, driven by your Azure subscription level. Currently, the maximum available DB size is 50 GB. Microsoft is about to launch a new offering for creating horizontal partitions of SQL Azure DBs as a way to scale out beyond 50 GB.
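For pointer 3, connecting really is just a regular SQL connection string pointed at your SQL Azure server. A typical one looks roughly like this (the server name, database and credentials are placeholders; note the user@server form that SQL Azure expects):

```
Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;
```

The Encrypt=True part matters, since SQL Azure only accepts encrypted connections.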
   
 I hope it helped. See you in the cloud!!

Saturday, January 15, 2011

Current State of Windows Azure (January 2011) - Part 1

I provided an overview of Azure in January 2010. That was a long time ago, and (stating the obvious) Azure has come a long way since its early beta release. In this post, I make an attempt to present the current state of Windows Azure, what has changed, and the direction in which I perceive it to be going in the future. I will also make an attempt to highlight various new features that were added in the November 2010 release.

Current State:
         Since the initial preview, when Azure primarily was a way to build Web and Worker roles and was capable of hosting only very rudimentary web applications, it has emerged as a set of streamlined services that can enable organizations to build scalable LOB (Line Of Business) applications without sacrificing much of the functionality that they are able to support in an on-premise solution. I have been pleasantly surprised by the level of security features that Microsoft has been able to add, and continues to add, to this platform.

        The broader platform can be divided into three kinds of services:
                     1. Windows Azure
                     2. SQL Azure
                     3. App Fabric
    
          Additionally, there are a few other key components that add tremendous value to the Microsoft cloud story, such as:
  • Platform Management Portal
  • Local Development Environment
  • Marketplace
   The following diagram provides a good overview of the various components that come together to form Windows Azure:

Windows Azure Components
Windows Azure:  
            Windows Azure has evolved into a platform that is hosted inside (geographically distributed) data centers, letting you deploy your applications to that underlying world-class infrastructure. You do that by utilizing a set of services (similar to APIs in the on-premise world) that Azure provides.
    You pay based on the amount of service you use (a.k.a. utility computing).

Service #1 : Compute Service
           One simple way to understand this service is to think of it as a meter that keeps track of the number of CPU cycles that your code uses. There are currently three ways of writing your code (depending upon what you want your code to do). These are called roles.
         a. Web Role        => Used for front-end development
         b. Worker Role   => Used for background processing
         c.  VM Role        => Used for hosting your pre-configured Hyper-V image  

Service # 2 : Storage Service
        There are two key types of storage service:
           1. Windows Azure Storage: It's the persistent storage in the cloud, but it is NOT relational. Currently, there are four types of Azure storage: blob storage, table storage, queue storage and Azure drives.
It's SIGNIFICANTLY cheaper than SQL Azure (the relational version), and good for large datasets that don't necessarily need the relational and querying capabilities of SQL Azure.

           2. SQL Azure: A variant of SQL Server hosted in Azure, with T-SQL-style querying capabilities. It's more expensive than Windows Azure Storage.

     Azure Storage Vs SQL Azure:
           One simple way to understand the difference:
                      Azure Storage => raw storage of objects that you can query using REST or a managed API.
                      SQL Azure => a processing engine on top of storage that has data-processing capabilities (queries/transactions/stored procedures), and therefore consumes more resources and is more expensive.
         
        One good strategy can be to store the high-volume data in low-cost Azure Storage, and use the (more expensive) SQL Azure to store indexes pointing to that data. Also, SQL Azure instances have size limitations that can be overcome by using partitioning, which is important to consider when architecting your Azure application.
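That "cheap bulk store plus relational index" strategy can be sketched in a few lines. Everything here is a stand-in: the dict plays the role of Windows Azure Storage, the list plays the role of a SQL Azure index table, and the invoice naming scheme is made up.

```python
blob_store = {}   # stand-in for Azure blob storage: blob_uri -> large payload
sql_index = []    # stand-in for a SQL Azure table: small, queryable pointer rows

def save_invoice(customer_id, year, payload):
    blob_uri = f"invoices/{customer_id}/{year}.bin"  # hypothetical naming scheme
    blob_store[blob_uri] = payload                   # bulk data goes to cheap storage
    sql_index.append((customer_id, year, blob_uri))  # only the pointer is indexed

def find_invoices(customer_id):
    # The "SQL" side answers the query; only matching blobs are fetched.
    return [blob_store[uri] for cid, _, uri in sql_index if cid == customer_id]

save_invoice(42, 2010, b"...big invoice scan...")
save_invoice(7, 2010, b"...another scan...")
print(find_invoices(42))  # [b'...big invoice scan...']
```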

Service # 3: AppFabric
           This is one service that has come a long way since its initial debut, and is still going through significant enhancements. This service is the building block behind many LOB applications, and provides the functionality typically provided by the middle tier in a 3-tier on-premise application. It enables web services, service bus, service registry, orchestration, security and other capabilities in the cloud.
           Each of these features adds its own unique value to Microsoft's cloud story, and I plan to cover them in the next few posts.

 I hope this post gave a good overview of the key Azure pieces, and I look forward to diving into the details in the next few posts.

Cheers!!

Sunday, December 05, 2010

Windows Azure Management Portal gets a facelift !!

Microsoft released a new UI for the Windows Azure Management Portal.
     - The new layout is far more intuitive than the earlier one.
     - I am glad that they have not over-utilized Silverlight animations. Silverlight animations look cool when demonstrating or selling something, but I have serious doubts about their usability in a day-to-day operational UI.
     - Integration of all your Azure services and subscriptions into a single portal is a good, welcome and much-needed enhancement. I am especially loving the SQL Azure admin features built into the portal. The information cube with DB usage statistics is a nice feature as well, and points to the increasing level of maturity SQL Azure is going to see over the next few releases.
     - The Ribbon UI will also go a long way towards establishing a consistent experience with other Microsoft Office products. I hope to see the Ribbon also getting incorporated into Microsoft's server and tools business over the next few months and years.

    Azure still needs several more cycles and sets of services to be a true app-dev platform, but this new portal seems to be a good step towards creating an experience for users that can be extended to new services (as and when they are launched).

  I also like that they have a link to the old portal; in case someone is in the middle of a project and doesn't want to get familiar with the new portal, they can always switch to the old one. So not like some of Microsoft's other software releases !! :)


There are a few glitches in the portal, but I hope they will get fixed in the next few weeks/months.
Enjoy!!

Thursday, December 02, 2010

Windows Azure SDK 1.3 release!!

I got a demo from Microsoft of some of the goodies being shipped with the Windows Azure SDK 1.3 release. It has made significant progress since the previous release, especially in the area of IAAS (infrastructure as a service). The new Windows Azure management portal (Silverlight front-end) has also been streamlined to support the new features and services. Here are some of the key features bundled in the new release. I cannot wait to try them out:

•  VM Role (Beta): Create your own custom VHD based on Windows Server 2008 R2 and deploy it to the cloud. This feature makes it so much easier to migrate existing applications to the cloud, reducing management and hosting costs.
•  Extra Small Instance Size (Beta): Run an extra small compute instance for only $0.05 per hour.
•  Remote Desktop Access: Connect to individual service instances with the remote desktop client as you would with any deployment in a hosted scenario.
•  Full IIS Support in Webroles: Host your web applications in IIS and configure IIS to suit your needs.
•  Elevated Privileges: Perform tasks in a service instance with elevated privileges.
•  Virtual Network (CTP): Use Windows Azure Connect to accomplish IP-level connectivity between on-premises and the cloud.
•  Network Enhancements: Restrict inter-role communication and use fixed ports on InputEndpoints.
•  Performance Improvement: Experience better performance when developing in the development fabric.
Cheers!!

Wednesday, December 01, 2010

De-mystifying Windows Azure AppFabric

There seems to be a lot of confusion about Windows Azure AppFabric. A lot of it is primarily because many people try to over-sell the capabilities of Azure AppFabric based on the marketing collateral published by Microsoft. Another big reason behind the confusion is the similarity in name with an entirely different product from Microsoft called "Windows Server AppFabric".

"Windows Azure AppFabric" Vs "Windows Server AppFabric":
            1.  They are two very different products designed for two very different purposes.
            2.  "Windows Server AppFabric" has come out of Microsoft projects like Dublin and Velocity; whereas "Windows Azure AppFabric" is the next version of what was called "Windows Azure .NET Services".
            3.  As I will explain below, they can complement each other to enable some really cool capabilities.

Windows Server AppFabric:
   1. It's part of Windows Server (on-premise).
   2. It aims to help developers who are trying to build on-premise web applications.
   3. The following are the key areas where it tries to provide help (also called its core capabilities):
                   a. Caching: This is used to speed up access to frequently accessed data, such as session info in an ASP.NET application. This is done using a feature called the "AppFabric Caching Service".
                   b. Hosting of composite applications: These are typically the applications built using "Windows Workflow Foundation" or "Windows Communication Foundation". The AppFabric management is integrated into IIS, and can be used to deploy, manage, and control your services.
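The real AppFabric Caching Service is a distributed, in-memory .NET cache, but the core pattern it enables, cache-aside with expiry, can be sketched in a few lines of Python. The `slow_db_lookup` function below is a made-up stand-in for a database call; the in-process dict stands in for the distributed cache.

```python
import time

class CacheAside:
    """Minimal cache-aside sketch: check the cache first, fall back to the
    slow store on a miss, and populate the cache with a time-to-live."""
    def __init__(self, load_fn, ttl_seconds=60.0):
        self._load = load_fn          # e.g. a database lookup
        self._ttl = ttl_seconds
        self._cache = {}              # key -> (value, expiry)

    def get(self, key):
        hit = self._cache.get(key)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]             # served from cache
        value = self._load(key)       # cache miss: hit the backing store
        self._cache[key] = (value, time.monotonic() + self._ttl)
        return value

calls = []
def slow_db_lookup(key):
    calls.append(key)                 # record each trip to the "database"
    return key.upper()

cache = CacheAside(slow_db_lookup, ttl_seconds=60)
cache.get("session-1"); cache.get("session-1")
print(len(calls))  # 1 -- the second get was served from the cache
```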

Step 1 of the wizard that you will go through to configure AppFabric for a Windows server gives a good high-level overview of some of the core capabilities:

 
    Additionally, tight integration with PowerShell and System Center can also help streamline the various capabilities provided by Windows Server AppFabric.

Windows Azure AppFabric:
        The initial focus was to enable applications hosted in the cloud to talk to applications hosted behind firewalls (on-premise) in a streamlined, secure way. The product has evolved to enable many different new scenarios such as "mobility - consumption of cloud-hosted services by mobile and smart devices", "messaging relay for communication between multiple desktop/on-premise/cloud applications" and other features typically associated with ESB (Enterprise Service Bus) implementations.

        Currently it comprises two key components (aka services), but I definitely see this feature going through multiple enhancements as cloud and on-premise technologies start to merge. Here is the official definition of the two features:
            a. Service Bus - Helps to provide secure connectivity between loosely-coupled services and applications, enabling them to navigate firewalls or network boundaries and to use a variety of communication patterns. Services that register on Service Bus can easily be discovered and accessed, across any network topology.

            b. Access Control - Helps you build federated authorization into your applications and services, without the complicated programming that is normally required to secure applications that extend beyond organizational boundaries. With its support for a simple declarative model of rules and claims, Access Control rules can easily and flexibly be configured to cover a variety of security needs and different identity-management infrastructures.
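At the time of writing, Access Control issues Simple Web Tokens (SWT): form-encoded claims with a trailing HMACSHA256 signature computed over everything before it. The following is a rough, illustrative Python sketch of how a relying party might verify such a token. The signing key and claims are made up, and production code should rely on the real ACS/WIF libraries rather than hand-rolled parsing like this.

```python
import base64
import hashlib
import hmac
from urllib.parse import parse_qsl, quote, urlencode

def sign_swt(claims, key):
    """Append an HMACSHA256 signature to form-encoded claims (SWT-style)."""
    body = urlencode(claims)
    sig = base64.b64encode(hmac.new(key, body.encode(), hashlib.sha256).digest())
    return body + "&HMACSHA256=" + quote(sig.decode())

def verify_swt(token, key):
    """Recompute the HMAC over everything before the trailing signature
    and compare it (in constant time) against the presented signature."""
    body, _, _ = token.rpartition("&HMACSHA256=")
    expected = hmac.new(key, body.encode(), hashlib.sha256).digest()
    presented = base64.b64decode(dict(parse_qsl(token))["HMACSHA256"])
    return hmac.compare_digest(expected, presented)

key = b"shared-secret"  # made-up signing key; ACS would provision the real one
token = sign_swt({"Issuer": "https://contoso.example", "Role": "Reader"}, key)
print(verify_swt(token, key))                              # True
print(verify_swt(token.replace("Reader", "Writer"), key))  # False: tampered claims
```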

                            

 Conclusion:
     While Windows Server AppFabric is a product that is targeted at making life easier for developers who are writing WCF- and WF-based applications to be hosted in IIS, Azure AppFabric is focused on enabling consumption of those (on-premise) services/workflows from the cloud. The combination of these two products can enable an unprecedented level of integration between organizations (vendors/suppliers/customers/public services/etc).

Sunday, November 14, 2010

Windows Azure Instances stuck in the Initializing, Busy or Stopping State

Although there could be many reasons for an instance to be stuck in a non-ready (Busy/Initializing/Stopping) state, I have experienced a common (and silly) reason: the number of instances that you have configured in the portal (or through the "ServiceConfiguration.cscfg" file).

I spent about an hour uploading (and of course deleting) the exact same Azure package with a single instance of the web and worker role. Sometimes it would upload and put the instances in the Ready state right away, and sometimes it would just get stuck in the Initializing or Busy state. I have started experiencing this behaviour in the last few months; basically, it has become more frequent with the increase in Azure adoption. Also, it is more frequent during day-time (I am hypothesizing) when there are probably more people trying to upload their packages to the Azure portal.

It turns out (and as warned through a dialog box when you try to upload the package to the Windows Azure portal) that Microsoft doesn't guarantee any SLA if you are running only a single instance of either a web or worker role. In addition to that, I am almost certain (after wasting an hour trying to figure out any other logical reason) that they have put in some logic in the load balancer that de-prioritizes any single-instance configuration in favour of multi-instance configurations during package upload. So, if you are short on time (or want to save some time) during package upload and instantiation on the Azure portal, I would recommend changing the number of instances to at least 2 even during the develop-deploy-test cycle. This may cost a little more (two instances vs. one), but I would say it's definitely worth it.

Bottom line - since Microsoft's SLA (99.95%) is not valid for single-instance deployments in terms of availability, keeping at least two instances of your role will save a lot of time and headache.

On a side note, there are several other benefits to maintaining at least two or more instances. For example, a single instance (and therefore the entire role) can become unavailable whenever the role instance is being restarted by Azure. Azure can restart a role instance for many different reasons, including:
a)      The role instance being recycled by the load balancer
b)      Load balancer issues
c)      Being upgraded to a new OS version
d)      Being re-booted to apply a patch or to resolve some other issue

The following blog post from Toddy gives a good list of other issues that may be causing your roles to be stuck in the Initializing, Busy or Stopping state: http://blog.toddysm.com/2010/01/windows-azure-deployment-stuck-in-initializing-busy-stopping-why.html

Hope this helps!
Piyush

Sunday, November 07, 2010

SQL Azure Federation: Horizontal Scaling in Cloud !!

  One of the concerns I hear a lot about Azure is the need for users to select the DB size when signing up for a SQL Azure instance. The maximum DB size that you can currently sign up for is 50 GB, which makes a lot of people worried about the scalability of SQL Azure. While 50 GB may be good enough for most small and medium-sized web applications, it's nowhere near what many large websites, LOB applications and data warehouses need.

     Microsoft's solution to this problem was something called "sharding". Sharding is a technique that has been in existence for a long time now and supports horizontal partitioning of databases. Essentially, it requires you to create a bunch of Azure DBs, treat each one as a separate partition (a "shard"), and programmatically direct your query to the correct partition. If it's a complex query, you will have to do the hard work of breaking it up based on your partition key, redirecting each part to the right partition, and merging the results back.
   Here is one article that explains how to scale out SQL Azure using horizontal partitioning, with the partitioning logic implemented in the data access layer using LINQ. Painful!!
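The manual sharding pattern described above can be sketched in a few lines (in Python here rather than LINQ, purely for illustration). The shard lists are stand-ins for separate SQL Azure databases, and the modulo-based partition key is a made-up choice.

```python
# Illustrative sharding sketch: each "shard" stands in for a separate
# SQL Azure database, and the data access layer picks/merges by hand.
SHARD_COUNT = 4
shards = [[] for _ in range(SHARD_COUNT)]   # stand-ins for 4 Azure DBs

def shard_for(customer_id):
    # Pick a partition based on the partition key (customer_id here).
    return shards[customer_id % SHARD_COUNT]

def insert_order(customer_id, amount):
    shard_for(customer_id).append((customer_id, amount))

def total_orders():
    # A cross-shard query: fan out to every shard and merge the results.
    return sum(amount for shard in shards for _, amount in shard)

insert_order(3, 100)   # lands in shard 3
insert_order(7, 50)    # also lands in shard 3 (7 % 4)
insert_order(2, 25)    # lands in shard 2
print(total_orders())  # 175
```

This is exactly the bookkeeping (routing, fan-out, merge) that SQL Azure Federation is meant to take off the developer's plate.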

  Although this solution works, expecting developers to write their own logic to manage partitions, redirect queries, etc. is a little over the edge. Since most on-premise DBs provide this feature out of the box, it was a big hurdle for SQL Azure adoption by large companies. Microsoft unveiled a much more elegant solution during PDC.
   It's called "SQL Azure Federation", and it is planned to be released in early 2011.

   SQL Azure will provide support for explicit horizontal partitioning, complete with support for new T-SQL keywords and commands like CREATE/USE/ALTER FEDERATION and CREATE TABLE...FEDERATE ON. Once you have set up the right federation key, redirecting queries to the correct "shard" is taken care of by the SQL Azure engine.

  This new feature essentially makes the 50 GB size limitation almost irrelevant for most data storage requirements. This, coupled with the elastic "provisioning" nature of the cloud, will make SQL Azure a compelling alternative for many organizations out there who are dealing with large datasets and scalability issues. I, for one, cannot wait to try this out.

Here is the actual session by Lev Novik from PDC, titled "Building Scale-Out Database Solutions on SQL Azure":

Cheers!!