
Monday, June 27, 2011

Cloud Computing Security Standards

As Cloud Computing gains traction among both enterprises and consumers, security remains the concern most often cited as a reason for reluctance in cloud adoption. While enterprises question the security of their data and information, consumers are concerned about privacy. Cloud Computing vendors are under tremendous pressure to demonstrate their commitment to addressing these hot buttons. In this context, it pays for all stakeholders to be aware of some of the prevalent and widely accepted security standards, the adoption of which helps alleviate security concerns and encourages greater cloud adoption.

SAS 70 - Statement on Auditing Standards No 70

What is it?

A well-recognized auditing standard established by the American Institute of Certified Public Accountants (AICPA).

What does it do?

Modern data centers and hosting providers have to deal with their customers' data being processed on, or residing on, their servers and storage devices. A SAS 70 audit checks whether the necessary safeguards and controls are in place at the data center to ensure the safety of customers' data.

Who asks for it?

Customers who want to enter into contracts with data centers, website hosting providers, or cloud computing infrastructure providers typically enquire about SAS 70 compliance.

More Info @ http://www.sas70.com/

PCI-DSS - Payment Card Industry - Data Security Standard

What is it?

A standard defined by the PCI Security Standards Council that specifies the protections to be put in place to keep cardholder data safe when handling card-based digital payments.

What does it do?

The standard specifies requirements for security management, policies, procedures, network architecture, software design and other aspects of handling card-related information in digital payments. It lays down 12 requirements to be put in place. To ensure compliance, a continuous 3-step process has to be established:

  • Assess: Take stock of your IT assets and business processes involved in payment card processing and analyze them for vulnerabilities that could give away cardholder data
  • Remediate: Fix the revealed vulnerabilities
  • Report: Generate records as specified by PCI DSS to validate remediation. Also submit compliance reports to the financial enterprises that you do business with.

Who asks for it?

Card brands, acquiring banks and customers entering into contracts with merchants, payment processors or hosting/cloud providers that store, process or transmit cardholder data typically enquire about PCI-DSS compliance.

More Info @ https://www.pcisecuritystandards.org/

ISO 27001

What is it?

An Information Security Management System (ISMS) standard published in October 2005 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).

What does it do?

The standard attempts to bring structure to information management in enterprises. As information became a key asset for enterprises, the need to define standard processes around information management, and to continually refine them, drove the establishment of this standard. The standard defines a model that covers legal, physical and technical aspects of information management. It takes a top-down, risk-based approach and is technology-neutral. The model is defined in 6 steps:

  • Define a security policy.
  • Define the scope of the ISMS.
  • Conduct a risk assessment.
  • Manage identified risks.
  • Select control objectives and controls to be implemented.
  • Prepare a statement of applicability.

Who asks for it?

Customers dealing with cross-border transactions are more comfortable when they can be sure that information passed on to other organizations is safe and does not fall into the wrong hands.

More Info @ http://www.27000.org/

Data Protection Directive (DPD)

What is it?

A set of European Union (EU) regulations that deal with the processing and movement of individuals' personal data.

What does it do?

With European nations having some of the most stringent privacy laws, the EU has made the DPD part of its privacy and human rights law. The directive governs both automated and non-automated processing of data. It assumes significance in the cloud computing scenario as more and more online services require individuals to divulge personal data when subscribing to services.

The EU directive incorporates the seven principles recommended earlier by the OECD (Organisation for Economic Co-operation and Development). The seven principles are:

  • Notice: Give the individual notice when data is being collected
  • Purpose: State the purpose for which the data is being collected and data collected should be used only for this purpose
  • Consent: Get the individual's consent before disclosing data
  • Security: Ensure data collected is secure from potential misuse
  • Disclosure: Individuals must be informed about who is collecting their data
  • Access: Individuals should be allowed access to their data and must be able to correct erroneous data.
  • Accountability: Individuals should have an ability to hold the data collectors accountable for the above principles

Who asks for it?

The EU directive is addressed to the member nations, who in turn have to enact laws to make it legally binding.

More Info @ http://www.dataprotectiondirective.com/


Monday, June 13, 2011

Understanding Cloud Computing – 5 – SaaS

In my previous posts on IaaS and PaaS, we covered the building blocks of Cloud Computing. SaaS is the topmost layer in our cloud computing stack; it rides on the power unleashed by the Infrastructure and Platform layers to deliver value directly to consumers and enterprises.
SaaS, or Software as a Service, is quite a buzzword these days. Why so? Is it a new concept?
Not really. SaaS is about hosting a software application on a server and allowing users to use it, via Internet-connected computers, from anywhere in the world. The user need not install the application on his or her computer; it can simply be accessed as a service over the Internet. Web-based email is a basic example of SaaS.
Other more recent examples include the photo editing that certain websites offer, Word-document-to-PDF conversion, and Google's word processing and spreadsheet applications, all of which you can access through a simple Internet browser. If SaaS has been around for so long, then why the buzz now?
Several reasons can be cited.
SaaS as a business-centered concept
SaaS as a concept has worked successfully for individual-centered applications but not for business-centered applications. There are both technology-related and business-related reasons for this. While SaaS applications like e-mail and office suites have taken off quite well, business-oriented SaaS applications like CRM (Customer Relationship Management) software, sales force automation, payroll, procurement and logistics software have only now started gaining traction.
Why so?
Technology has matured
  • New software design and delivery models allow multiple instances of an application to run at once
  • Internet bandwidth costs have dropped significantly to allow companies to buy the connectivity necessary to allow the remotely hosted applications to run smoothly
  • Media-rich, AJAX-based UIs that do not require a full page refresh every time you click a button.
Business customers are realizing the benefits SaaS can offer
  • Delayed deployments and high Total Cost of Ownership are forcing CIOs to look away from the traditional software delivery format.
  • Business customers are frustrated with endless cycles of buying software licenses, paying for maintenance contracts, unresponsive helplines, costly upgrades, etc.
  • Pay-as-you-go benefits
  • Easy add ons
  • Easy ability to switch vendors if current vendor is unresponsive to business problems
  • No software maintenance headaches
Add to this the early successes of SaaS pioneers like Salesforce.com, WebEx and Digital Insight. The model has proven viable. We now need to wait and see how the trends in SaaS unfold.

Sunday, June 12, 2011

Understanding Cloud Computing – 4 – PaaS

Platform as a Service corresponds to the second layer in my analogy of cloud computing to your commonplace desktop at home.
PaaS, or Platform as a Service, is akin to an operating system that allows application developers, programmers and the like to install their language support systems, write and test code, package and distribute it, and finally deploy/install applications to make them usable by end customers.
The difference lies purely in the 'as a Service' aspect. In the cloud computing context, the platform is not tied to a single operating system; rather, it is hosted on the cloud and available on demand to developers and programmers via any machine connected to the Internet. The developer requests the environment and it gets provisioned to him over the cloud.
PaaS also follows the 4 tenets of Cloud Computing.
Examples of PaaS platforms include Microsoft's Azure, Salesforce's Force.com and Google's App Engine.
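To make the idea of an on-demand platform concrete, here is a minimal, illustrative sketch of what a developer hands to a PaaS such as Google App Engine: application code only, with no servers to provision. It uses the webapp2 framework bundled with the older App Engine Python runtimes; exact module names and configuration files vary by runtime version, so treat this as a sketch rather than a deployment guide.

    # main.py - a minimal handler for Google App Engine's Python runtime
    # (illustrative sketch; module names and app.yaml settings vary by version)
    import webapp2

    class MainPage(webapp2.RequestHandler):
        def get(self):
            # The platform handles servers, scaling and routing;
            # the developer supplies only application code like this handler.
            self.response.headers["Content-Type"] = "text/plain"
            self.response.write("Hello from a PaaS-hosted app!")

    app = webapp2.WSGIApplication([("/", MainPage)], debug=True)

Deploying it is simply a matter of uploading the code; the platform provisions and scales the environment on demand.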
We will explore SaaS in our next part in this series.

Monday, June 6, 2011

US Fed extracting the juice out of the cloud

If ever there were a generic case study sought by enterprises seeking to leverage cloud computing, there couldn't be a better one than the US Federal Government and its "Cloud First" policy.


Two key lessons come out crystal clear from the US government's experiments with cloud computing:

1. Move essential but non-core items to the cloud. Focus on the low-hanging fruit first.

Case in Point 1: The US government has saved $40 million a year by moving e-mail services for the General Services Administration (GSA) and the Dept. of Agriculture to the cloud.

Case in Point 2: The Recovery Accountability and Transparency Board has saved $750,000 by moving to Amazon Web Services' cloud-computing infrastructure, a move started in May 2010. About 100 data centers nationwide are being closed this year, and the government has an ultimate goal of shuttering 800 data centers by 2015.

2. Move to the cloud those apps that can take advantage of at least 2-3 of the basic tenets that cloud computing promises.

Case in Point 1: GSA moved its website to the cloud, ensuring that site content could be updated in hours instead of days or weeks and freeing staff for tasks other than site maintenance. This move alone is expected to save US taxpayers $1.7 billion. The website took advantage of the cloud's on-demand ability to scale up and down, and it also lent itself to a pay-as-you-go model for the site's support and maintenance.


Thursday, June 2, 2011

Gang of Four on the Cloud

Eric Schmidt, the former CEO of Google, has described his company as belonging to a "Gang of Four" that is revolutionizing consumer activity on the Internet and in the cloud today.

  • Google with its Search
  • Facebook with its Social Site
  • Amazon with its E-Commerce site
  • Apple with its Devices

This has naturally invited comparison, per TechCrunch, with the former Gang of Four: Microsoft, Intel, Cisco and Dell.

If I were to take a stab at extending the Gang of Four into a Gang of Six, here are my choices:

  • Microsoft with its Enterprise Offerings
  • VMware with its Infrastructure Offerings

Do you have anything to extend this list?



Friday, April 29, 2011

SETI - Shutting down for lack of funds


Remember SETI? The ambitious 'Search for Extra-Terrestrial Intelligence' program, which was probably also the first public program of its kind to use distributed computing power.

The SETI program sought to use the power of millions of distributed home and office computers to process the tonnes of data that a set of radio telescopes, the Allen Telescope Array in Hat Creek, California, USA, collected day in and day out. Users whose systems were connected to the Internet could install a small client that fetched bits of data from a remote SETI server, analysed the data when the computer's processor was idle, and sent the results back to the central server.
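In essence, each volunteer machine ran a simple fetch-analyse-report loop. Below is a hypothetical Python sketch of that loop; the server URL, message format and analysis step are invented for illustration and are not the real SETI@home protocol or client.

    # Hypothetical sketch of a volunteer-computing client loop.
    import json
    import time
    import urllib.request

    WORK_SERVER = "https://example.org/seti/work"   # hypothetical endpoint

    def fetch_work_unit():
        # Download a small chunk of telescope data to process locally.
        with urllib.request.urlopen(WORK_SERVER) as resp:
            return json.load(resp)

    def analyse(work_unit):
        # Placeholder for the signal processing done while the CPU is idle.
        return {"id": work_unit["id"], "candidate_signal": False}

    def report(result):
        # Send the result back to the central server.
        data = json.dumps(result).encode("utf-8")
        req = urllib.request.Request(WORK_SERVER, data=data,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    while True:
        report(analyse(fetch_work_unit()))
        time.sleep(60)   # the real client only worked when the processor was idle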

What's disheartening is that the program is being stalled for lack of funds. Started in 2007, the array of 42 radio dishes has scanned deep space for signals indicative of communication from intelligent life. The SETI Institute itself was founded in 1984 with NASA funding.

The $50 million array was built by SETI and UC Berkeley, with a $30 million donation from Microsoft Corp. co-founder Paul Allen being the biggest chunk of the money. Operating the dishes costs about $1.5 million a year, mostly to pay the staff of eight to ten researchers and technicians who run the facility. An additional $1 million a year is needed to collect and sift the data from the dishes.


Tuesday, April 26, 2011

Traditional Hosting versus Cloud Hosting - 3 Differences

All too often, the question of what the difference is between hosting a website on a remote server and hosting it on a cloud crops up during passionate discussions on cloud computing. Even truly geeky friends of mine forget the fundamental differences between cloud hosting and non-cloud hosting and start wondering whether cloud computing is just a fad invented to pull the wool over unsuspecting enterprises' eyes. To all those friends of mine, and to those of you who are genuinely puzzled over the differences, here's a beautiful video that settles the matter once and for all. You can bookmark this URL for next time... ;-)



Saturday, April 23, 2011

Understanding Cloud Computing - 3 - IaaS

The Infrastructure layer is the foundation of cloud computing, and it is the layer that triggered the idea of cloud computing in the first place. Hardware purchased by one division of a company and left unused must have prompted other divisions to request that hardware on a temporary basis. That practice sparked the idea of subscribing to hardware on demand rather than buying and owning it.

Hosting companies that provided space on remote servers took the first step in this direction. They were prompted more by the need for servers to be exposed to the public at all times, unlike enterprise servers that sit within the firewall. This also freed enterprises from having to buy and maintain public-facing servers; they simply put their websites and related publicly consumable data onto these hosted servers.

The next big step in IaaS was taken almost exactly 5 years ago by a company named Amazon (more famous for its online book store). AWS, or Amazon Web Services (started in July 2002), announced the availability of its EC2 (Elastic Compute Cloud) offering in August 2006. EC2 allowed users to rent computing power and pay for it by the hour of usage. Users could load their applications onto Amazon-hosted infrastructure, and related services allowed the computing and storage capacity to be scaled up or down based on the consumption of those applications.
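To illustrate the rent-by-the-hour idea, here is a minimal sketch using boto3, AWS's current Python SDK (which did not exist in 2006 but drives the same EC2 operations). The AMI ID is a placeholder and the instance type is only an example; real values depend on your account and region.

    # Illustrative sketch: provision one on-demand EC2 instance, then release it.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Ask EC2 for one small virtual server on demand.
    response = ec2.run_instances(
        ImageId="ami-00000000",    # placeholder machine image
        InstanceType="t2.micro",   # example pay-by-the-hour instance size
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Provisioned on-demand instance:", instance_id)

    # When the workload is done, release the capacity and stop paying for it.
    ec2.terminate_instances(InstanceIds=[instance_id])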

Today you have a whole set of companies trying to match what AWS brought to the market; GoGrid, Rackspace and Akamai are among the top few.

You could safely say that the current trends and enthusiasm around cloud computing had their seeds sown back in 2006. In our next post, let's dive into the Platform as a Service facet of Cloud Computing.


Monday, February 21, 2011

Gartner’s 3 predictions for the cloud…

Gartner has released its 2011 predictions for cloud computing:

  1. More than half of an enterprise's transactions will be done via cloud-based infrastructure over the next 4 years, based on a survey of Chief Information Officers (CIOs).
  2. In the Asia-Pacific region, around 40 percent of enterprises with more than 1,000 employees will invest in the cloud this year.
  3. Software as a Service (SaaS) is projected to grow from 9% of total enterprise application software spending in 2009 to 14% in 2014. SaaS enterprise application revenues will also more than double during this timeframe, a compound annual growth rate (CAGR) of 15.3% between 2009 and 2014.
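For reference, the two figures in the third prediction are consistent with each other: growing at 15.3% a year from 2009 to 2014 compounds to roughly (1.153)^5 ≈ 2.04, i.e., a little more than doubling over the five years.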

Tuesday, January 11, 2011

IT Needs Then and Cloud Needs Now....

When the digital computer age was ushered in back in the 1960s, IBM ruled the roost with System/360, a large mainframe offering. Over the next decade and a half we saw the emergence of Digital Equipment Corporation (DEC) with its PDP and VAX systems, Hewlett-Packard with the HP-2115, and Data General with its Novas. Each company had its own proprietary hardware stack and the languages it supported: DEC supported UNIX, then in its infancy, HP supported Fortran and Algol, and IBM had its own proprietary mainframe languages.
Enterprise and consumer needs drove adoption of these different stacks, each of which was best suited to a particular segment. Over the next 2-3 decades, enterprises suddenly realized that the computer industry had left them with a spaghetti of systems, none of which were interoperable or supported cross-talk. And this after millions of dollars had been poured into procuring these systems. The anguish was so pronounced that it drove some of the legacy providers to extinction while others had to learn to dance to survive.
Louis Gerstner, the erstwhile CEO of IBM, gives an account in his book "Who Says Elephants Can't Dance?" of IBM's historic turnaround between April 1993 and March 2002. Gerstner led IBM from the brink of bankruptcy and mainframe obscurity back to the forefront of the technology business by reorienting the company to the demands of the time. One of the main cornerstones he laid was the establishment of IBM Global Services, a systems integration division whose main objective was to help enterprises stitch together the multitude of computer systems they had invested in and get them to work together. Thus was born the huge SI industry that has seen the likes of IBM, Accenture, EDS, Capgemini and the Indian majors TCS, Wipro, Infosys, HCL and Cognizant drive business.
The reason I cited a snippet from history is to demonstrate two things
  1. To draw a comparison between the digital computing era and the cloud computing world as it is evolving
  2. To give the reader an indication of how transformations in the industry happen over years and decades.
Cloud computing was a buzzword for most of the first decade of the new millennium. The fag end of the decade saw the emergence of Amazon Web Services, Google Apps, Microsoft Azure, GoGrid and Salesforce's Force.com as some of the prime cloud players. We also see a lot of smaller players, such as Service Mesh, Zuora, JamCracker and Ping Identity, who serve niche needs and complement the bigger players. However, the biggest lacuna is the interoperability of clouds: users choosing a cloud provider do not have a seamless path to migrate to another cloud provider. What you see is a picture similar to that of the erstwhile era, and a rising need felt by enterprise customers for a fair degree of standards and interoperability between clouds. That also means a need for companies that can help enterprises achieve it. Whether this set of players will come from the big league or from the smaller niche players remains to be seen... What's your take on this matter?

Monday, December 27, 2010

Understanding Cloud Computing - 2

Remember the analogy I presented in my last post in this series, about the cloud being akin to a giant computer? We also talked about the 3 layers of this computer. What was interesting was that each of them carried the "as a Service" suffix. What does this mean, and what are its implications?

Extending our giant computer analogy a little further, it is clear that such a giant computer is not for any one individual's or institution's use alone. Rather, it is a computer that can be used by everybody who needs it. Theoretically, the computer is so huge that everybody could make use of it at different points in time or at the same time. While one person might need just a few GB of hard disk space and some processing power, another team might request several TB of space, lots of RAM and some specific applications. As individuals and teams request these units from the computer, they are allocated to them almost in real time. Once they are done, they relinquish the requested resources, which return to the main pool for somebody else to take advantage of. Hence the giant computer's resources are available on demand, as a service, just like a taxi is available as a service when you need to get from one point to another or when your own car has broken down.
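As a toy illustration of this allocate-and-relinquish cycle, here is a minimal Python sketch of a shared resource pool; the class, numbers and units are invented purely to mirror the analogy.

    # Toy sketch of the giant computer's shared pool of resources.
    class ResourcePool:
        def __init__(self, total_gb):
            self.free_gb = total_gb

        def allocate(self, gb):
            if gb > self.free_gb:
                raise RuntimeError("not enough capacity available right now")
            self.free_gb -= gb
            return gb                  # handle to the allocated capacity

        def release(self, gb):
            self.free_gb += gb         # back into the pool for someone else

    pool = ResourcePool(total_gb=1_000_000)
    mine = pool.allocate(50)           # take only what you need, when you need it
    # ... use the capacity ...
    pool.release(mine)                 # relinquish it once you are done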

All three layers of the cloud, the infrastructure layer, the platform layer and the software layer, are available as a service to their consumers. Hence the suffix "as a Service".

In our next post in this series, let's look at the Infrastructure layer in detail and understand its intricacies.

Thursday, December 9, 2010

Understanding Cloud Computing - 1

A lot of my readers ask me fundamental questions about the cloud. This series of posts is dedicated to all my fans and readers who would like cloud computing simplified for them.


Let's start with a pictorial analogy between cloud computing and something we are more familiar with. Consider a server, or even your desktop computer or laptop. Scratch the surface to see what makes these computing devices tick, and you will find 3 distinct layers.

  1. The hardware consisting of the motherboard, the memory (RAM, hard disks), the processor, etc
  2. The operating system that loads when you first switch on your computer - Windows, Linux, Unix, etc
  3. The software applications that you use when using your computer - Notepad, Word, Paint Brush, Winamp, Internet Explorer, etc
A cloud, in simple terms, is your computer multiplied several hundred thousand times, and theoretically it can go beyond that to infinity. More on this characteristic in my next post.

Coming back to the analogy, imagine the cloud to be one giant computer. Obviously you would need the equivalent of the 3 layers here too, right?
  1. Infrastructure - IaaS (Infrastructure as a Service) - The hardware layer equivalent
  2. Platform - PaaS (Platform as a Service) - The operating system equivalent, with more bells and whistles. Let's understand that in a separate post
  3. Software - SaaS (Software as a Service) - The software applications you would use to draw on the power of the giant cloud computer.
You might have noticed that all three had the terms "as a Service" attached to them. Let's explore that in our next post in this series.

Friday, November 19, 2010

Grids and Clouds - The fundamental differences

Quite often I get asked what the difference is between a grid and a cloud. Well, let's try to nail down the differences.

The grid is 'pre-cloud' terminology. However, it distinctly stands for setups built by academic groups that wanted to solve problems requiring the crunching of large sets of numbers. Typical uses include satellite image processing, weather pattern analysis, analyzing data from nuclear experiments or other extreme physics, and number crunching to resolve mathematical conjectures.

The objective of setting up these grids was primarily to assemble several high-performing servers and get them to work in parallel. Hence you also see the term High Performance Computing (HPC) used in the context of grids. Some of the best-known grids are also recognized as among the world's best supercomputers.

This is where today's public clouds differ. They are set up to achieve scalability, as opposed to the performance that grids were built for: assemble enough commoditized infrastructure to let enterprises offload work and data onto these systems. Providing platforms for High Scale Computing (HSC) rather than High Performance Computing has been the driver for the clouds of today.

Clouds are designed to take on problems described in cloud parlance as "Embarrassingly Parallel Problems" (EPP). Consider a problem that involves finding out how many times the word "Apple" is repeated in all of Encyclopedia Britannica's content. Its solution can be approached in typical EPP fashion: divide and distribute the problem into as many words as there are in the encyclopedia, then funnel them all into a routine that runs in parallel and checks whether each word matches "Apple". Each parallel routine returns a 1 or 0, and a final counter is updated as each routine returns its result. This massively parallel approach solves in minutes a problem that might otherwise have taken days, if not months, on a standalone system. All of it is achieved by utilizing the HSC aspect of today's clouds, as the sketch below illustrates.
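Here is a minimal Python sketch of that embarrassingly parallel word count, using a local multiprocessing pool to stand in for the cloud's parallel nodes; the sample text and worker count are purely illustrative.

    # Illustrative sketch of an embarrassingly parallel word count.
    from multiprocessing import Pool

    def matches_apple(word):
        # Each parallel routine returns 1 or 0 for its word.
        return 1 if word.strip(".,").lower() == "apple" else 0

    def count_apples(text, workers=8):
        words = text.split()                          # divide the problem ...
        with Pool(workers) as pool:
            results = pool.map(matches_apple, words)  # ... run the checks in parallel ...
        return sum(results)                           # ... and update a final counter

    if __name__ == "__main__":
        sample = "An Apple a day keeps the doctor away, said the apple grower."
        print(count_apples(sample))                   # -> 2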


An HPC system, by contrast, would have a highly complex routine running on each node, and certain nodes might continue their crunching based on the outputs of other nodes, and vice versa. HPC works in a different dimension.

