Circuitous windings in thought

June 8, 2011

Crowdsourcing Discussion at Caltech | June 11, 2011

Filed under: Customer, Events — David Chou @ 2:39 pm

Crowdsourcing, as a method of leveraging the massive pool of online users as resources, has become a viable set of techniques, tools, and marketplaces for organizations and entrepreneurs to drive innovation and generate value. It is applied effectively in a variety of explicit/implicit and systematic/opportunistic models: collective intelligence (Wikipedia, Yelp, and Twitter analytics); collaborative filtering (Amazon’s product and Netflix’s movie recommendation engines); social tagging (del.icio.us, StumbleUpon, and Digg); social collaboration (Amazon’s Mechanical Turk); and crowdfunding (disaster relief, political campaigns, micropatronage, startup and non-profit funding, etc.).

An entrepreneur can utilize crowdsourcing tools for funding and monetization, task execution, and market analysis; or implement crowdsourcing as a technique for data cleansing and filtering, derived intelligence, etc. However, leveraging crowdsourcing also requires an entrepreneur to navigate a complex landscape of questions. How is it different from outsourcing? Is it truly cost-efficient? What motivates individual contributions? How do you grow and sustain an active community? How do you ensure quality and service levels? What are the legal and political implications? Join us for a program that explores these questions, and the use of crowdsourcing to match specific needs to an available community.

Come interact with the local community and an esteemed group of speakers that includes Peter Coffee (VP, Head of Platform Research at Salesforce.com), Dana Mauriello (Founder and President at ProFounder), Michael Peshkam (Founder and CEO at IamINC), Nestor Portillo (Worldwide Director, Community & Online Support at Microsoft), Arvind Puri (VP, Interactive at Green Dot), and Alon Shwartz (Co-Founder & CTO at Docstoc.com).

June 11, 9am-11am. Visit the website for more details and registration information – http://www.entforum.caltech.edu/

Cross-posted from my blog at http://blogs.msdn.com/dachou

March 14, 2011

Cloud-optimized architecture and Advanced Telemetry

Filed under: Architecture, Azure, Cloud Computing, Customer — David Chou @ 7:37 pm

One of the projects I had the privilege of working on this past year is the Windows Azure platform implementation at Advanced Telemetry. Advanced Telemetry offers an extensible, remote, energy-monitoring-and-control software framework suitable for a number of use case scenarios. One of its current product offerings is EcoView™, a smart energy and resource management system for both residential and small commercial applications. Cloud-based and entirely Web accessible, EcoView enables customers to view, manage, and reduce their resource consumption (and thus utility bills and carbon footprint), all in real-time via the intelligent on-site control panel and remotely via the Internet.


Much more than Internet-enabled thermostats and device end-points, “a tremendous amount of work has gone into the core platform, internally known as the TAF (Telemetry Application Framework) over the past 7 years,” as Tom Naylor, CEO/CTO of Advanced Telemetry, wrote on his blog. The TAF makes up the server-side middleware implementation, provides the intelligence for the network of control panels (with EcoView being one of the applications), and enables an interesting potential third-party application model.

The focus of the Windows Azure platform implementation was moving the previously hosted, server-based architecture into the cloud. Advanced Telemetry completed the migration in 2010, and the Telemetry Application Framework is now running on the Windows Azure platform. Tom shared some insight from the experience in his blog post “Launching Into the Cloud”, and of course, this effort was also highlighted as a Microsoft case study on multiple occasions.

 

The Move to the Cloud

As the first case study pointed out, the initial motivation to adopt cloud computing was the need to reduce the operational costs of maintaining an IT infrastructure while still being able to scale the business forward.

“We see the Windows Azure platform as an alternative to both managing and supporting collocated servers and having support personnel on our side dedicated to making sure the system is always up and the application is always running,” says Tom Naylor. “Windows Azure solves all those things for us effectively with the redundancy and fault tolerance we need. Because cost is based on usage, we’ll also be able to much more accurately assess our service fees. For the first time, we’ll be able to tell exactly how much it costs to service a particular site.”

For instance, in the Channel 9 video, Tom mentioned that replicating the co-located architecture from Rackspace to the Windows Azure platform resulted in approximately 75% cost reduction on a monthly basis, in addition to other benefits. One of the major “other” benefits is agility, which arguably is much more valuable than the cost reduction normally associated with cloud computing. In fact, as the second case study pointed out, in addition to breaking ties to an IT infrastructure, the Windows Azure platform became a change enabler that supported the shift to a completely different business model for Advanced Telemetry (from a direct market approach to an original equipment manufacturer (OEM) model). The move to the Windows Azure platform provided the much-needed scalability (of the technical infrastructure), flexibility (to adapt to additional vertical market scenarios), and manageability (maintaining the level of administrative effort while growing the business operations). The general benefits cited in the case study were:

  • Opens New Markets with OEM Business Model
  • Reduces Operational Costs
  • Gains New Revenue Stream
  • Improves Customer Service

Cloud-Optimized Architecture

However, this is not just another simple story of migrating software from one data center to another. Tom Naylor understood the principles of cloud computing well, and saw the value in optimizing the implementation for the cloud platform instead of just using it as a hosting environment for the same thing from somewhere else. I discussed this in more detail in a previous post, Designing for Cloud-Optimized Architecture. Basically, it is about leveraging cloud computing as a way of computing and as a new development paradigm. Sure, conventional hosting scenarios do work in cloud computing, but there is more value to gain if an application is designed and optimized specifically to operate in the cloud, and built using unique features of the underlying cloud platform.

In addition to the design principles around the “small pieces, loosely coupled” concept I discussed previously, another aspect of the cloud-optimized approach is to think about storage first, as opposed to thinking about compute. This is because, in cloud platforms like the Windows Azure platform, we can build applications using cloud-based storage services such as Windows Azure Blob Storage and Windows Azure Table Storage, which are horizontally scalable distributed storage systems that can store petabytes of data and content without requiring us to implement and manage the infrastructure. This is, in fact, one of the significant differences between cloud platforms and traditional outsourced hosting providers.
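
To make the storage-first idea concrete, below is a minimal sketch of how a telemetry point reading might be modeled and written to Windows Azure Table Storage, using C# and the StorageClient library that shipped with the Windows Azure SDK of that era. The entity, table, and property names here are illustrative assumptions on my part, not Advanced Telemetry’s actual schema; the point is that the PartitionKey/RowKey scheme that makes the store horizontally scalable is designed into the data model up front, with no storage infrastructure to provision or manage.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity: one row per telemetry point reading.
public class PointReadingEntity : TableServiceEntity
{
    public PointReadingEntity() { } // parameterless constructor required for deserialization

    public PointReadingEntity(string siteId, string pointId, DateTime readAtUtc, double value)
    {
        PartitionKey = siteId; // one partition per customer site = the unit of scale-out
        // Combined row key: point id + reversed ticks, so the newest readings sort first.
        RowKey = pointId + "_" + (DateTime.MaxValue.Ticks - readAtUtc.Ticks).ToString("D19");
        ReadAtUtc = readAtUtc;
        Value = value;
    }

    public DateTime ReadAtUtc { get; set; }
    public double Value { get; set; }
}

public static class ReadingStore
{
    public static void Save(CloudStorageAccount account, PointReadingEntity reading)
    {
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("PointReadings"); // no-op if the table already exists
        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("PointReadings", reading);
        context.SaveChangesWithRetries(); // uses the client's built-in retry policy
    }
}

Compare this with capacity-planning a relational database for the same write volume: here the scalability question largely reduces to choosing good partition and row keys.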

In the Channel 9 video interview, Tom Naylor said, “what really drove us to it, honestly, was storage”. He mentioned that the Telemetry Application Framework currently handles about 200,000 messages per hour, each containing up to 10 individual point updates (which roughly equates to 500 updates per second). While this traffic volume isn’t comparable to the top websites in the world, it still poses significant issues for a startup company trying to store and access the data effectively. In fact, the Advanced Telemetry team previously had to cull the data periodically just to keep the operational data set at a workable size.

“We simply broke down the functional components, interfaces and services and began replicating them while taking full advantage of the new technologies available in Azure such as table storage, BLOB storage, queues, service bus and worker roles. This turned out to be a very liberating experience and although we had already identified the basic design and architecture as part of the previous migration plan, we ended up making some key changes once unencumbered from the constraints inherent in the transitional strategy. The net result is that in approximately 6 weeks, with only 2 team members dedicated to it (yours truly included), we ended up fully replicating our existing system as a 100% Azure application. We were still able to reuse a large percentage of our existing code base and ended up keeping many of the database-driven functions encapsulated in stored procedures and triggers by leveraging SQL Azure.” Tom Naylor described the approach on his blog.

The application architecture employed many cloud-optimized designs, such as:

  • Hybrid relational and NoSQL data storage – SQL Azure for data that is inherently relational, and Windows Azure Table Storage for historical data and events, etc.
  • Event-driven design – Web roles receiving messages act as the event capture layer, asynchronously off-loading processing to Worker roles (see the sketch below)
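
As a rough illustration of that event-driven design (a generic sketch of the queue-based off-load pattern, not Advanced Telemetry’s actual code), a Web role can capture an incoming telemetry message, drop it on a Windows Azure Queue, and return immediately, while a Worker role drains the queue and does the heavy lifting:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class IngestWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("ingest");
        queue.CreateIfNotExist();

        while (true)
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(1)); // back off while the queue is empty
                continue;
            }

            ProcessPointUpdate(message.AsString); // e.g., parse and persist the point updates
            queue.DeleteMessage(message); // delete only after successful processing
        }
    }

    private void ProcessPointUpdate(string payload)
    {
        // Hypothetical processing step: parse the telemetry payload and write it
        // to table storage, for instance via the ReadingStore.Save() sketch above.
    }
}

On the Web role side, the capture layer is essentially a one-liner, queue.AddMessage(new CloudQueueMessage(payload)), which keeps request latency low. Note that this naive one-message-per-update pattern is also exactly the kind of design that, as described under “Lessons Learned” below, generates a very large transaction count.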

Lessons Learned

In the real world, things rarely go completely as anticipated or planned, and that was the case for this real-world implementation as well. 🙂 Tom Naylor was very candid about some of the challenges he encountered:

  • Early adopter challenges and learning new technologies – Windows Azure Table and Blob Storage, and Windows Azure AppFabric Service Bus are new technologies and have very different constructs and interaction methods
  • “The way you insert and access the data is fairly unique compared to traditional relational data access”, said Tom, such as the use of “row keys, combined row keys in table storage and using those in queries” (a pattern illustrated in the sketch after this list)
  • Transactions – the initial design was very asynchronous (store the payload in Windows Azure Blob storage, then put a message in a Windows Azure Queue), but that resulted in a very large number of transactions, and significant costs under the per-transaction charge model for Windows Azure Queue; the team had to leverage Windows Azure AppFabric Service Bus to reduce that impact
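
To make the row-key comment concrete, here is a hedged sketch of the kind of combined-row-key query Tom is describing, reusing the hypothetical PointReadingEntity from the earlier sketch. Because table storage indexes only PartitionKey and RowKey, a “point id + reversed timestamp” row key lets a “latest N readings for this point” question be expressed as a lexical range scan on RowKey:

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class ReadingQueries
{
    // Fetch the most recent readings for one point within a site's partition.
    public static PointReadingEntity[] GetRecentReadings(
        CloudTableClient tableClient, string siteId, string pointId, int take)
    {
        TableServiceContext context = tableClient.GetDataServiceContext();

        // All row keys for this point start with "{pointId}_"; '`' is the next
        // ASCII character after '_', so these two bounds form a prefix range scan.
        string rangeStart = pointId + "_";
        string rangeEnd = pointId + "`";

        return context.CreateQuery<PointReadingEntity>("PointReadings")
            .Where(r => r.PartitionKey == siteId
                     && r.RowKey.CompareTo(rangeStart) >= 0
                     && r.RowKey.CompareTo(rangeEnd) < 0)
            .Take(take)
            .ToArray(); // reversed ticks in the row key => the first N rows are the newest N
    }
}

Getting these key and query patterns right is most of the learning curve alluded to above; they take the place of the secondary indexes and ad hoc WHERE clauses a relational developer would normally reach for.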

The end result is an application that is horizontally scalable, allowing Advanced Telemetry to elastically scale the deployments of individual layers up or down according to capacity needs, as the application layers are nicely decoupled from each other, and the application is decoupled from horizontally scalable storage. Moreover, the cloud-optimized architecture supports both multi-tenant and single-tenant deployment models, enabling Advanced Telemetry to support customers who have higher data-isolation requirements.

Cross-posted from my blog at http://blogs.msdn.com/dachou

May 5, 2010

Free Training – Microsoft Web Camps in Mountain View on May 27 & 28, 2010

Filed under: Customer, Web — David Chou @ 9:48 pm


The Microsoft Web Team is excited to announce a new series of events called Microsoft Web Camps!

function WebCamps () {
   Day1.Learn();
   Day2.Build();
}

Interested in learning how new innovations in Microsoft’s Web Platform and developer tools like ASP.NET 4 and Visual Studio 2010 can make you a more productive web developer? If you’re currently working with PHP, Ruby, ASP or older versions of ASP.NET and want to hear how you can create amazing websites more easily, then register for a Web Camp near you today!

Microsoft’s Web Camps are free, two-day events that allow you to learn and build on the Microsoft Web Platform. At camp, you will hear from Microsoft experts on the latest components of the platform, including ASP.NET Web Forms, ASP.NET MVC, jQuery, Entity Framework, IIS, Visual Studio 2010, and much more. See the full agenda here.

Register now and we look forward to seeing you at camp soon!

 

Mountain View, CA – May 27 & 28

1065 La Avenida
Mountain View, CA 94043
Phone: (650) 693-4000

Map

Speakers

Jon Galloway works for Scott Hanselman as an ASP.NET Community Program Manager, helping to evangelize and promote the ASP.NET framework and Web Platform. Jon previously worked at Vertigo Software, where he worked on several Microsoft conference websites (PDC08, MIX09, WPC09), built the CBS March Madness video player, and led a team that created several Silverlight advertising demos for MIX08. Prior to that, he worked in a wide range of web development shops, from scrappy startups to Fortune 500 financial companies. He was an ASP.NET and Silverlight Insider, an ASP.NET MVP, a published author, and a regular contributor to several open source .NET projects. He runs the Herding Code podcast (http://herdingcode.com) and blogs at http://weblogs.asp.net/jgalloway

Cross-posted from my blog at http://blogs.msdn.com/dachou
