Sunday, July 15, 2012

Moore's Law could fail for the first time in the history of computing power growth!

Moore's Law is a famous and well-known observation proposed by one of Intel's founders, Gordon E. Moore. It states that the number of transistors on an integrated circuit doubles approximately every two years, so transistor sizes keep shrinking and processing power keeps rising at roughly that pace. I have seen this law prove itself throughout all the years I grew up watching hardware and software evolve.
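As a quick back-of-the-envelope illustration of what "doubling every two years" means, here is a tiny projection sketch; the starting transistor count and year range are arbitrary example values, not figures from any real roadmap:

```python
# Back-of-the-envelope Moore's Law projection: transistor count doubling
# every two years. The starting figure is just an illustrative round number.
def moores_law(start_count, start_year, end_year, doubling_period=2):
    """Project transistor counts from start_year to end_year."""
    counts = {}
    for year in range(start_year, end_year + 1, doubling_period):
        counts[year] = start_count * 2 ** ((year - start_year) // doubling_period)
    return counts

# Example: 1 billion transistors in 2012, projected a decade ahead.
for year, count in moores_law(1_000_000_000, 2012, 2022).items():
    print(year, f"{count:,}")
```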

Intel has already reached the 22-nanometer scale for its transistors. But the challenge now is how transistors can keep shrinking further. All the nanolithography and etching technologies that evolved for this purpose seem to have saturated and can no longer reduce transistor sizes. Intel and other players in the market are already behind schedule in shrinking transistors to 18 nanometers and then 11 nanometers. A recent article in MIT Technology Review magazine cites the same problem as a major concern for chip manufacturing companies worldwide.

Intel has tied up with ASML, a maker of nano-precision instruments, to develop a new technology called EUV lithography, i.e. Extreme Ultraviolet Lithography. This technology holds the potential to shrink transistors further using extreme ultraviolet light. A major hurdle, however, is the loss of light to absorption inside the surrounding vacuum-based optical systems: only a small fraction of the light directed at the silicon wafer actually reaches it, the rest being absorbed along the way. Intel is funding ASML heavily to solve this problem so that large volumes of chips can be manufactured cheaply. ASML's CFO has also urged other competitors in the market, like Samsung and Taiwan Semiconductor Manufacturing Company, to tie up with partners so that semiconductor lithography technology keeps evolving and Moore's Law does not fail for the first time in history.


As things stand, none of the nanolithography technologies available for manufacturing processor chips at the 22-nanometer level can push dimensions further down for at least the next three years. This means a slight slowdown in the growth of processing power. It would be the first instance since the invention of the transistor and the integrated circuit that the pace of Moore's Law has slowed. Here's hoping a breakthrough in EUV lithography comes soon, so that we are not deprived of growing processing power in the future.

Saturday, July 7, 2012

Conventional Datacenter versus Cloud Offerings

A conventional datacenter comprises basic storage and processing components, unlike the cloud, which not only has all the features of a conventional datacenter but also integrates web analytics plus development and testing frameworks, all under a single architectural setting.

Conventional datacenters are always built from scratch to very specific requirements. For example, if a business wants to host a Java-based website for a large consumer base, it must set up a dedicated static server, develop the site using that server's capabilities, and host it there and then.

But what are the disadvantages? First, a setup on a specifically configured datacenter will always require add-ins like load balancers, indexers, web analytics tools, and development frameworks installed with lots of custom settings. Second, an identical setup has to be created so that whatever the business wishes to launch can be tested properly.

All this hassle of detailed architectural insight and configuration does help in building a very precise datacenter that exactly suits a requirement. But the cost of doing it is huge, as multiple vendors and multiple areas come into the picture, each of which needs careful attention so that concerns like spamming, virus attacks, scalability, high-performance delivery, handling of excessive load, and cache build-up for fast access and search are all taken care of. Moreover, the person or business involved always has to pay upfront, even if the hosted product/service/site is not used by the intended consumers 24x7x365. Server maintenance, code upgrades, and web-based analysis have to be continuously monitored, and all of this comes with costs that are independent of usage patterns.

In a cloud-based environment, on the contrary, we already get most of these "extra" features: integrated scalability, global accessibility, high-performance delivery, availability of analytics, and, most important of all, security that is always enabled and protects the underlying products/services. A person or business only has to pay for what is actually used in terms of processing power, storage, bandwidth, analytics derived, development or testing licenses, etc.
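To make the contrast concrete, here is a toy calculation comparing a flat, upfront datacenter fee with pay-per-use cloud pricing; every figure below is made up purely for illustration, not taken from any real vendor's price list:

```python
# Toy comparison of a fixed upfront datacenter cost versus pay-per-use
# cloud pricing. All numbers are made-up illustrations.

HOURS_PER_MONTH = 24 * 30

def datacenter_cost(monthly_fee=5000.0):
    """Flat fee: you pay the same whether traffic is heavy or zero."""
    return monthly_fee

def cloud_cost(busy_hours, rate_busy=2.0, rate_idle=0.1):
    """Pay-per-use: a higher rate while serving load, a trickle when idle."""
    idle_hours = HOURS_PER_MONTH - busy_hours
    return busy_hours * rate_busy + idle_hours * rate_idle

for busy in (50, 200, 600):
    print(f"{busy:>4} busy hours: datacenter = {datacenter_cost():7.2f}, "
          f"cloud = {cloud_cost(busy):7.2f}")
```

The point of the sketch is simply that the datacenter line stays flat regardless of usage, while the cloud line tracks it.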

One more feature now being implemented in the cloud gives users the power of configuration: we can set up our own work environment on top of a layer of preconfigured setups. For Windows, for example, there are development frameworks, databases, web services, and test management tools, all based on Microsoft technologies, so a business can simply pick the architectural components it needs and start developing, hosting, or testing a product. This feature, provided by the respective cloud vendors, lets users adopt almost any web technology base and start using it without worrying too much about ground-level architectural design and setup. Windows is just one example: if a business wishes to use Google-based APIs, it can use Google's cloud services; if it prefers open source, Amazon is there; if a CRM-based application is to be developed, there is Salesforce; and many more such combinations are available for the product/service industry.
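As a purely illustrative sketch of the idea (the vendors and component names below are placeholders, not real product catalogues), "picking a preconfigured stack" amounts to selecting from a menu and overriding only what you must, rather than assembling everything from the ground up:

```python
# Purely illustrative "menu" of preconfigured stacks a cloud vendor might
# expose. Entries are placeholders, not real catalogue items.
PRECONFIGURED_STACKS = {
    "microsoft": {"framework": ".NET", "database": "SQL Server",
                  "web": "IIS", "testing": "test management tools"},
    "google":    {"framework": "Google APIs", "database": "managed datastore",
                  "web": "managed frontends", "testing": "dev sandbox"},
    "amazon":    {"framework": "open source of choice", "database": "managed MySQL",
                  "web": "virtual servers + load balancer", "testing": "staging images"},
}

def provision(vendor, overrides=None):
    """Start from the vendor's preconfigured stack, tweak only what you must."""
    stack = dict(PRECONFIGURED_STACKS[vendor])
    stack.update(overrides or {})
    return stack

if __name__ == "__main__":
    print(provision("microsoft", {"testing": "in-house tool"}))
```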

The constant availability of web analytics in the cloud helps derive usage statistics across geolocations, which proves to be a boon if we are trying to read usage patterns across a variety of demographic locations globally. To get the same capabilities in a conventional datacenter, a business would need a group of experts in all these fields to set them up and keep all these components available to the business's client groups at all times.
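For a sense of what such analytics boil down to, here is a minimal sketch, using made-up request records, of the kind of by-region aggregation an integrated cloud analytics layer gives you out of the box:

```python
from collections import Counter

# Made-up request log: (region, page) pairs standing in for real analytics data.
requests = [
    ("asia-south", "/home"), ("us-east", "/pricing"), ("asia-south", "/home"),
    ("eu-west", "/home"), ("us-east", "/home"), ("asia-south", "/pricing"),
]

# Usage pattern by region: the core of a geolocation-based report.
by_region = Counter(region for region, _ in requests)
for region, hits in by_region.most_common():
    print(f"{region:12} {hits} requests")
```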

A conventional datacenter is still a very good base if we are evolving a new kind of architecture that involves a lot of hardware, software, and firmware experimentation, with frequent revamping of and playing with each configurable component. In a cloud-based environment we cannot experiment with the provider's basic architecture, as that could jeopardize the whole setup. The cloud's layers would be intruded upon, and the impact that could inflict is still a rather interesting topic to study :)

A conventional datacenter is also best suited if we are developing an application that forms a middle- or back-end component which never has to be scaled up or down, and whose performance simply does not affect the overall functioning of the system architecture. Think, for example, of a service providing computations for mathematical simulations or computational modelling. Such requirements need neither rapidly scalable hardware and software nor support for a large user base whose numbers swing from peak to low, or vice versa, or grow exponentially, unless the simulations are for testing those very requirements.

So a new business model could emerge in which a cloud provider lets developers or businesses configure some of the basic architectural features of its cloud without impacting the business services/products already hosted there. This is a challenging proposition, but if somebody can pitch in with a revolutionary idea/model/plan, it would be well suited to domains like banking, where security has to be handled much more seriously, analytical components can be detached, the communication layer can be replaced by an ESB layer, and so on and so forth.

The next promising business in the cloud domain will be a cloud architecture very similar to the hyper-hybrid cloud architecture; the difference is that here the architectural components of a single private/public cloud will be configurable, rather than integrating services from various other hybrid clouds.





