A conventional datacenter provides basic storage and processing components. The cloud, by contrast, offers all the features of a conventional datacenter plus integrated web analytics and development and testing frameworks, all under a single architectural setting.
Conventional datacenters are always built from scratch to very specific requirements. For example, if a business wants to host a Java-based website for a large consumer base, it must set up a dedicated static server, develop the site against that server's capabilities, and host it there.
But what are the disadvantages? First, a specific setup on a configured datacenter will always require add-ins such as load balancers, indexers, web analytics tools, and development frameworks, each installed with many custom settings. Second, an identical copy of the setup must be built so that whatever the business wants to launch can be tested properly.
All this detailed architectural insight and configuration does produce a very precise datacenter that exactly suits a requirement. But the cost is huge, since multiple vendors and multiple areas come into play, each of which must be handled carefully: spam prevention, virus protection, scalability, high-performance delivery, handling of excessive load, and cache build-up for fast access and search. Moreover, the person or business involved must keep paying upfront even if the hosted product, service, or site is not used by the intended consumers 24x7, 365 days a year. Server maintenance, code upgrades, and web-based analysis must be continuously monitored, and all of this comes at a cost that is independent of usage patterns.
In a cloud-based environment, on the contrary, we already get most of these "extra" features: integrated scalability, global accessibility, high-performance delivery, analytics, and, most important of all, security that is always enabled, protecting the underlying products and services. A person or business pays only for what is actually used in terms of processing power, storage, bandwidth, analytics derived, and development or testing licenses.
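The pay-for-what-you-use contrast can be made concrete with a small sketch. The rates and the flat monthly figure below are purely illustrative assumptions, not real vendor pricing, which varies widely; the point is only how usage-based billing diverges from a fixed datacenter bill for a lightly used service.

```python
# Hypothetical rates for illustration only -- real cloud pricing varies by vendor.
CLOUD_RATE_PER_CPU_HOUR = 0.05    # USD per vCPU-hour (assumed)
CLOUD_RATE_PER_GB_MONTH = 0.02    # USD per GB-month of storage (assumed)
DATACENTER_FLAT_MONTHLY = 2000.0  # USD, paid regardless of usage (assumed)

def cloud_cost(cpu_hours: float, storage_gb: float) -> float:
    """Cloud model: pay only for what was actually consumed."""
    return cpu_hours * CLOUD_RATE_PER_CPU_HOUR + storage_gb * CLOUD_RATE_PER_GB_MONTH

def datacenter_cost(cpu_hours: float, storage_gb: float) -> float:
    """Conventional datacenter: a flat cost, irrespective of usage patterns."""
    return DATACENTER_FLAT_MONTHLY

# A lightly used service: 3,000 CPU-hours and 500 GB in a month.
print(cloud_cost(3000, 500))       # 160.0 -- far below the flat bill
print(datacenter_cost(3000, 500))  # 2000.0 -- owed even if nobody visits
```

Under these assumed numbers, the cloud bill tracks actual consumption, while the datacenter bill stays at the flat figure whether the site sees heavy traffic or none at all.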
One more feature now being implemented in the cloud is configurability for the hosting user: we can set up our own work environment over a layer of preconfigured stacks. For Windows, for instance, development frameworks, databases, web services, and test-management tools are all available as Microsoft-based technologies, so a business can simply assemble the architectural components it needs and start developing, hosting, or testing a product. This feature, provided by the respective cloud vendors, lets users adopt any web technology base and get started without worrying too much about ground-level architectural design and setup. Windows is just one example: a business that wants Google-based APIs can use Google cloud services; one that prefers open source can use Amazon; one building a CRM application can use Salesforce; and many more such combinations are available to the product and service industry.
The constant presence of web analytics in the cloud helps derive usage statistics across geolocations, which proves a boon when we try to read usage patterns across demographic locations globally. To get the same capabilities in a conventional datacenter, a business would need a group of experts in all these fields to set up those components and keep them available to its client groups at all times.
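The kind of per-geolocation usage pattern described above boils down to aggregating request records by region. The request log below is invented for illustration; real cloud analytics services expose this data through their own vendor-specific APIs, but the aggregation idea is the same.

```python
from collections import Counter

# Hypothetical request log: (country, page) pairs, as an analytics
# service might record them. Real analytics APIs differ per vendor.
requests = [
    ("IN", "/home"), ("IN", "/pricing"), ("US", "/home"),
    ("DE", "/home"), ("IN", "/home"), ("US", "/docs"),
]

# Usage pattern per demographic location: total hits per country,
# sorted from most to least active.
hits_by_country = Counter(country for country, _ in requests)
print(hits_by_country.most_common())  # [('IN', 3), ('US', 2), ('DE', 1)]
```

In a conventional datacenter the business would have to build and operate this collection and aggregation pipeline itself; in the cloud the equivalent roll-ups are typically available out of the box.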
A conventional datacenter is still a very good base if we are evolving a new kind of architecture, where a lot of hardware, software, and firmware experimentation must be done, involving frequent revamping of each configurable component. In a cloud-based environment we cannot experiment with the provider's basic architecture, as that could jeopardize the whole setup: the cloud layers would be intruded upon, and the impact they might suffer remains a rather interesting topic to study :)
A conventional datacenter is also best suited when we are developing a middle- or back-end component that never has to be scaled up or down, and whose performance simply does not affect the overall functioning of the system. Think, for example, of a service providing computations for mathematical simulations or computational modeling. Such requirements need neither rapidly scalable hardware or software nor support for a large user base whose numbers swing between peaks and lows or grow exponentially, unless the simulations are meant to test those very requirements.
So a new business model could emerge in which a cloud provider lets developers or businesses configure some of the basic architectural features of its cloud without impacting the business services and products already hosted there. This is a challenging proposition, but if somebody can pitch in with a revolutionary idea, model, or plan, it could be well suited to domains like banking, where security must be handled far more seriously: analytical components could be detached, the communications layer replaced by an ESB layer, and so on.
The next promising business in the cloud domain will be an architecture very similar to hyper-hybrid cloud architecture, with one difference: here the architectural components of a single private or public cloud would be configurable, rather than integrating services from various other hybrid clouds.