About eSpirl

eSpirl has been integral to developing online communities of practice. It delivers campaigns designed to activate passionate users in those communities and develops initiatives that promote them. The initiatives are measurable, predictable, and scalable, and they promote dialogue and innovation. eSpirl develops and distributes information that is consistent, relevant, and valuable, broadening the community of practice. It has been involved in building a large, targeted online subscriber base through an online communications strategy and detailed communication plans, and it incorporates social networking elements as part of its very fabric.

Advantages & Disadvantages of Selenium-Supported Programming Languages – A Summary

JAVA

Advantages:
  • Java is distributed: distributed computing involves several computers on a network working together.
  • The Java compiler, interpreter, and runtime environment were each developed with security in mind.
  • Java provides multimedia facilities that enable programmers to develop multimedia applications.
  • Java is architecture-neutral and platform independent.

Disadvantages:
  • Java programs run on a virtual machine, so they run slowly compared with natively compiled programs.
  • Programs may not always run correctly even if they are written correctly, because the JVM itself may be written incorrectly.
  • There is no separation of specification from implementation, and no preconditions or postconditions.

C#

Advantages:
  • C# is safer to run: because a C# program is compiled into an intermediate language, the OS can always check that no malicious code is present.
  • The cost of maintenance for C# is much lower than that of C++, a positive side effect of C# helping programmers write code that is as bug-free as possible.
  • C# implements the modern concept of object-oriented programming, which enables developers to produce secure data applications.
  • C# supports effective and reusable components.

Disadvantages:
  • C# is less flexible than C++; it depends greatly on the .NET Framework, and anything not found in the .NET Framework is difficult to implement.
  • It does not support multiple inheritance.
  • C# is slower to run.

PYTHON

Advantages:
  • Python does not rely on heavy syntax rules; instead, tabbing and spacing play an important role in program flow.
  • Python does not enforce a strict type on containers or variables, so developers can design a container to hold different types of data.
  • A program written in Python for one platform using only the standard libraries can easily be ported to another operating system without recompiling or repackaging.
  • Python is a general-purpose language, notable for its ease of learning, portability, dynamic typing, and integration with other languages.

Disadvantages:
  • Python is an interpreted language, so program runtime is roughly 1–5 times slower than Java, C, or C++.
  • Python is not best for memory-intensive tasks.
  • Python is not a great choice for high-end 3D games that take up a lot of CPU.
  • Other drawbacks include language translation, documentation, and the use of modules.

RUBY

Advantages:
  • Ruby is a pure OOP language; it allows inheritance, encapsulation, and polymorphism of objects.
  • Ruby is a dynamic language: methods and variables may be added and redefined at runtime.
  • Ruby is a very high-level language, which means it can handle complex data structures, and complex operations on them, with relatively few instructions.
  • It has a smart garbage collector and is a scripting language, which makes it easy to do scripting operations such as examining system resources, using pipes, and capturing output.

Disadvantages:
  • Ruby is an interpreted language: the source code has to be interpreted at runtime, which means it runs slower than an equivalent compiled application.
  • Anyone who uses your application will also be able to see the source code, so it is not secure.

PERL

Advantages:
  • Portability: Perl code that does not use system-specific features can run on any platform.
  • Perl makes using composition for code reuse very straightforward.
  • It allows multiple inheritance and operator overloading.
  • Perl provides features required for large projects: modularization, object-oriented techniques, and arbitrary data structures.

Disadvantages:
  • You can’t easily create a binary image from a Perl file. This is not a serious problem on Unix, but it might be a problem on Windows.
  • Perl lacks function signatures. In most programming languages, when you declare a function you also declare its signature, listing the names of the parameters and, in some languages, their types. Perl doesn’t do this.
  • It is hard to build data structures in Perl.
  • This makes it hard to read even well-written code by programmers who happen to use features you are less familiar with.

PHP

Advantages:
  • PHP is open source. It is developed and maintained by a large group of PHP developers, which helps create a support community and an abundant extension library.
  • It is stable: since it is maintained by many developers, bugs are fixed quickly when found.
  • You can connect to databases easily using PHP. Because many websites are data or content driven, this largely reduces the development time of web apps.
  • It can be run on many platforms.

Disadvantages:
  • PHP has security problems. It is open source, so anyone can see the source code.
  • It is not suitable for large applications: it is hard to maintain because it is not very modular.
  • It is not good for creating desktop applications.
  • PHP tends to execute more slowly than assembly, C, and other compiled languages.
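
Since the comparison above is framed around Selenium, here is a minimal sketch of what a Selenium WebDriver test looks like in Java. The URL and the element locator are illustrative placeholders, and the example assumes a ChromeDriver binary is available locally.

    // Minimal Selenium WebDriver sketch (Java); URL and locator are placeholders.
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class PageSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();           // launch a local Chrome session
            try {
                driver.get("https://example.com");           // open the page under test
                WebElement heading = driver.findElement(By.tagName("h1"));
                System.out.println("Page heading: " + heading.getText());
            } finally {
                driver.quit();                               // always release the browser
            }
        }
    }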

A report on Hadoop

Takeaway: Hadoop has been helping analyze data for years now, but there are probably more than a few things you don’t know about it.

7 Things to Know About Hadoop

What is Hadoop? It’s a yellow toy elephant. Not what you were expecting? How about this: Doug Cutting – co-creator of this open-source software project – borrowed the name from his son, who happened to call his toy elephant Hadoop. In a nutshell, Hadoop is a software framework developed by the Apache Software Foundation that’s used for data-intensive, distributed computing. And it’s a key component in another buzzword readers can never seem to get enough of: big data. Here are seven things you should know about this unique, freely licensed software.

How did Hadoop get its start?

Twelve years ago, Google built a platform to manipulate the massive amounts of data it was collecting. Like the company often does, Google made its design available to the public in the form of two papers: Google File System and MapReduce.

At the same time, Doug Cutting and Mike Cafarella were working on Nutch, a new search engine. The two were also struggling with how to handle large amounts of data. Then the two researchers got wind of Google’s papers. That fortunate intersection changed everything by introducing Cutting and Cafarella to a better file system and a way to keep track of the data, eventually leading to the creation of Hadoop.

What is so important about Hadoop?

Today, collecting data is easier than ever. Having all this data presents many opportunities, but there are challenges as well:

  • Massive amounts of data require new methods of processing.
  • The data being captured is in an unstructured format.

To overcome the challenges of manipulating immense quantities of unstructured data, Cutting and Cafarella came up with a two-part solution. To solve the data-quantity problem, Hadoop employs a distributed environment – a network of commodity servers – creating a parallel processing cluster, which brings more processing power to bear on the assigned task.
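
To make the parallel-processing idea concrete, here is a minimal sketch of the classic word-count job written against Hadoop’s Java MapReduce API (MapReduce is discussed later in this report). The class names and the command-line input/output paths are illustrative, not taken from any specific deployment: each mapper works on its own slice of the input on a different node, and the reducers merge the partial counts.

    // Classic MapReduce word count; a sketch, not a tuned production job.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        // Each mapper processes one slice of the input in parallel on a cluster node.
        public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(Object key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    word.set(token);
                    ctx.write(word, ONE);
                }
            }
        }
        // Reducers combine the partial results produced by all the mappers.
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> counts, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable c : counts) sum += c.get();
                ctx.write(key, new IntWritable(sum));
            }
        }
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory in HDFS
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory in HDFS
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }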

Next, they had to tackle unstructured data, or data in formats that standard relational database systems were unable to handle. Cutting and Cafarella designed Hadoop to work with any type of data: structured, unstructured, images, audio files, even text. A Cloudera (a Hadoop integrator) white paper explains why this is important:

    “By making all your data usable, not just what’s in your databases, Hadoop lets you uncover hidden relationships and reveals answers that have always been just out of reach. You can start making more decisions based on hard data, instead of hunches, and look at complete data sets, not just samples and summaries.”

What is Schema on read?

As was mentioned earlier, one of the advantages of Hadoop is its ability to handle unstructured data. In a sense, that is “kicking the can down the road.” Eventually the data needs some kind of structure in order to analyze it.

That is where schema on read comes into play. Schema on read is the melding of what format the data is in, where to find the data (remember, the data is scattered among several servers), and what’s to be done to the data – not a simple task. It’s been said that manipulating data in a Hadoop system requires the skills of a business analyst, a statistician and a Java programmer. Unfortunately, there aren’t many people with those qualifications.
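
As a hedged illustration of the idea, the sketch below shows structure being imposed only at the moment a raw log line is read; the field names, field order, and tab delimiter are assumptions made for the example, not part of any standard.

    // "Schema on read": the raw line carries no declared structure; the schema
    // lives in the reader, which produces a typed record only at read time.
    public final class ClickRecord {
        public final String userId;
        public final long timestamp;
        public final String url;

        private ClickRecord(String userId, long timestamp, String url) {
            this.userId = userId;
            this.timestamp = timestamp;
            this.url = url;
        }

        // Parse one raw tab-delimited log line into a typed record.
        public static ClickRecord parse(String rawLine) {
            String[] parts = rawLine.split("\t", -1);
            return new ClickRecord(parts[0], Long.parseLong(parts[1]), parts[2]);
        }

        public static void main(String[] args) {
            ClickRecord r = ClickRecord.parse("user-42\t1400000000\t/index.html");
            System.out.println(r.userId + " visited " + r.url + " at " + r.timestamp);
        }
    }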

What is Hive?

If Hadoop was going to succeed, working with the data had to be simplified. So, the open-source crowd got to work and created Hive:

    “Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.”

Hive enables the best of both worlds: database personnel familiar with SQL commands can manipulate the data, and developers familiar with the schema on read process are still able to create customized queries.

Apache Hive is a data warehouse system that is often used with an open-source analytics platform called Hadoop. Hadoop has become a popular way to aggregate and refine data for businesses. Hadoop users may use tools like Apache Spark or MapReduce to compile data in precise ways before storing it in a file handling system called HDFS. From there, the data can go into Apache Hive for central storage.

Techopedia explains Apache Hive

Apache Hive and other data warehouse designs are the central repositories for data and play important roles in a company’s IT setup. They need to have specific goals for data retrieval, security and more.

Apache Hive has a language called HiveQL, which shares some features with the widely popular SQL language for data retrieval. It also supports metadata storage in an associated database.
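
As a rough sketch of how a SQL-literate user might reach Hive from Java, the snippet below runs a HiveQL aggregation over JDBC. The host name, credentials, and the web_logs table are placeholders, and the Hive JDBC driver jar (org.apache.hive:hive-jdbc) is assumed to be on the classpath.

    // Query HiveServer2 over JDBC with HiveQL; connection details are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQuery {
        public static void main(String[] args) throws Exception {
            // HiveServer2 typically listens on port 10000; adjust for your cluster.
            String url = "jdbc:hive2://hive-host:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "analyst", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page")) {
                while (rs.next()) {
                    System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
                }
            }
        }
    }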

Apache Spark is an open-source program used for data analytics. It’s part of a greater set of tools, including Apache Hadoop and other open-source resources for today’s analytics community.

Experts describe this relatively new open-source software as a data analytics cluster computing tool. It can be used with the Hadoop Distributed File System (HDFS), which is a particular Hadoop component that facilitates complicated file handling.

Some IT pros describe the use of Apache Spark as a potential substitute for the Apache Hadoop MapReduce component. MapReduce is also a clustering tool that helps developers process large sets of data. Those who understand the design of Apache Spark point out that it can be many times faster than MapReduce, in some situations.

Those reporting on the modern use of Apache Spark show that companies are using it in various ways. One common use is for aggregating data and structuring it in more refined ways. Apache Spark can also be helpful with analytics machine-learning work or data classification.

Typically, organizations face the challenge of refining data in an efficient and somewhat automated way, where Apache Spark may be used for these kinds of tasks. Some also imply that using Spark can help provide access to those who are less knowledgeable about programming and want to get involved in analytics handling.

Apache Spark includes APIs for Python and related software languages.
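
To illustrate the “aggregating data and structuring it in more refined ways” use case described above, here is a small sketch using Spark’s Java API. The input path, column name, and output location are assumptions made for the example.

    // Read raw CSV from HDFS, aggregate it, and write a refined Parquet output.
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;

    public class SparkAggregation {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("refine-events")
                    .getOrCreate();

            Dataset<Row> events = spark.read()
                    .option("header", "true")
                    .csv("hdfs:///data/raw/events/");          // raw landing area

            Dataset<Row> perUser = events.groupBy(col("user_id")).count();
            perUser.write().parquet("hdfs:///data/refined/events_per_user/");

            spark.stop();
        }
    }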

Apache HBase is a specific kind of database tool written in Java and used with elements of the Apache Software Foundation’s Hadoop suite of big data analysis tools. Apache HBase is an open source product, like other elements of Apache Hadoop. It represents one of several database tools for the input and output of large data sets that are crunched by Hadoop and its various utilities and resources.

Apache HBase is a distributed non-relational database, which means that it doesn’t store information in the same way as a traditional relational database setup. Developers and engineers move data to and from Apache HBase and Hadoop tools like MapReduce for data analysis. The Apache community promotes Apache HBase as a way to get direct access to big data sets. Experts point out that HBase is based on something called Google BigTable, a distributed storage system.

Some of the popular features of Apache HBase include some kinds of backup and failover support, as well as APIs for popular programming languages. Its compatibility with the greater Hadoop system makes it a candidate for many kinds of big data management problems in the enterprise.
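
The following is a hedged sketch of writing and reading a single row through the HBase Java client. The table name, column family, and row key are invented for illustration, and the cluster configuration is assumed to come from an hbase-site.xml on the classpath.

    // Write one cell to an HBase table and read it back by row key.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();   // picks up hbase-site.xml
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("user_events"))) {

                // Write: row key "user-42", column family "d", qualifier "last_login".
                Put put = new Put(Bytes.toBytes("user-42"));
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("last_login"),
                        Bytes.toBytes("2014-05-01"));
                table.put(put);

                // Read the same cell back.
                Result result = table.get(new Get(Bytes.toBytes("user-42")));
                byte[] value = result.getValue(Bytes.toBytes("d"), Bytes.toBytes("last_login"));
                System.out.println("last_login = " + Bytes.toString(value));
            }
        }
    }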

What kind of data does Hadoop analyze?

Web analytics is the first thing that comes to mind, analyzing Web logs and Web traffic in order to optimize websites. Facebook, for example, is definitely into Web analytics, using Hadoop to sort through the terabytes of data the company accumulates.

Companies use Hadoop clusters to perform risk analysis, fraud detection and customer-base segmentation. Utility companies use Hadoop to analyze sensor data from their electrical grids, allowing them to optimize the production of electricity. Major companies such as Target, 3M and Medtronics use Hadoop to optimize product distribution, business risk assessments and customer-base segmentation.

Universities are invested in Hadoop too. Brad Rubin, an associate professor at the University of St. Thomas Graduate Programs in Software, mentioned that his Hadoop expertise is helping sort through the copious amounts of data compiled by research groups at the university.

Can you give a real-world example of Hadoop?

One of the better-known examples is the TimesMachine. The New York Times has a collection of full-page newspaper TIFF images, associated metadata, and article text from 1851 through 1922, amounting to terabytes of data. NYT’s Derek Gottfrid, using an EC2/S3/Hadoop system and specialized code:

    “Ingested 405,000 very large TIFF images, 3.3 million articles in SGML and 405,000 xml files mapping articles to rectangular regions in the TIFFs. This data was converted to a more web-friendly 810,000 PNG images (thumbnails and full images) and 405,000 JavaScript files.”

Using servers in the Amazon Web Services cloud, Gottfrid mentioned they were able to process all the data required for the TimesMachine in less than 36 hours.

Is Hadoop already obsolete or just morphing?

Hadoop has been around for over a decade now. That has many saying it’s obsolete. One expert, Dr. David Rico, has said that “IT products are short-lived. In dog years, Google’s products are about 70, while Hadoop is 56.”

There may be some truth to what Rico says. It appears that Hadoop is going through a major overhaul. To learn more about it, Rubin invited researchers to a Twin Cities Hadoop User Group meeting, and the topic of discussion was Introduction to YARN:

      “Apache Hadoop 2 includes a new MapReduce engine, which has a number of advantages over the previous implementation, including better scalability and resource utilization. The new implementation is built on a general resource management system for running distributed applications called YARN.”

      Hadoop gets a lot of buzz in database and content management circles, but there are still many questions around it and how it can best be used.

Apache Slider is a new code base for the Hadoop data analytics tool set, or ‘suite’, licensed by the Apache Software Foundation. The project is expected to be released in the second half of 2014 and will help users apply Hadoop and the YARN resource management tool to various goals and objectives.

Techopedia explains Apache Slider

Experts explain that Apache Slider will help to extend the reach of what Hadoop and YARN can do by allowing certain kinds of databases to run unmodified in the YARN resource management environment.
YARN is an existing Hadoop resource that focuses on resource management and complements other tools like MapReduce and the Hadoop HDFS file handling system. Apache Slider will make more types of programs compatible with YARN and extend the use cases that are possible.
Instead of modifying existing applications, say experts, Apache Slider will allow for a much broader and more diversified application of database and data analytics platforms to Hadoop’s core software resources. Using Apache Slider may also improve the efficiency of memory and processing resources for an entire project.
Another way to explain the use of Apache Slider and its development is that it can help YARN to eventually become the central software or “operating system” for a corporate data warehouse or other data center. For instance, tools like Apache HBase and Hive are often used in enterprise environments. Making these more compatible with Hadoop YARN can have some real impact on business process efficiency.
DWH and Hadoop
Big data analytics, advanced analytics (i.e., data mining, statistical analysis, complex SQL, and natural language processing), and discovery analytics benefit from Hadoop. HDFS and other Hadoop tools promise to extend and improve some areas within data warehouse architectures:
Data staging. Several DW teams have consolidated and migrated their staging area(s) onto HDFS to take advantage of its low cost, linear scalability, facility with file-based data, and ability to manage unstructured data. Users who prefer to hand-code most of their ETL solutions will most likely feel at home in code-intense environments such as Apache MapReduce, Pig, and Hive.
They may even be able to refactor existing code to run there. For users who prefer to build their ETL solutions atop a vendor tool, the community of vendors for ETL and other data management tools is rolling out new interfaces and functions for the entire Hadoop product family.
Data archiving. When organizations embrace forms of advanced analytics that require detailed source data, they amass large volumes and retain most of the data over time, which taxes areas of the DW architecture where source data is stored. Storing terabytes of source data in the core EDW’s RDBMS can be prohibitively expensive, which is why many organizations have moved such data to less expensive satellite systems within their extended DW environments.
Similar to migrating staging areas to HDFS, some organizations are migrating their stores of source data and other archives to HDFS. This lowers the cost of archives and analytics while providing greater capacity.
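
As a small sketch of the archiving pattern just described, the snippet below copies a local archive file into HDFS using the Hadoop FileSystem API; both paths are placeholders for illustration.

    // Copy a local archive file into HDFS; paths are placeholders.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveToHdfs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();      // reads core-site.xml for fs.defaultFS
            try (FileSystem fs = FileSystem.get(conf)) {
                Path local = new Path("/var/archive/source_2013.csv.gz");
                Path remote = new Path("/warehouse/archive/source_2013.csv.gz");
                fs.copyFromLocalFile(local, remote);       // move the archive into HDFS
                System.out.println("Archived to " + fs.getFileStatus(remote).getPath());
            }
        }
    }
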
Multi-structured data. Relatively few organizations are currently getting BI value from semi- and unstructured data, despite years of wishing for it. HDFS can be a special place within your DW environment for managing and processing semi-structured and unstructured data. Hadoop users are finding this approach more successful than stretching an RDBMS-based DW platform to handle data types it was not designed for.
One of Hadoop’s strongest complements to a DW is its handling of semi- and unstructured data, but don’t go thinking that Hadoop is only for unstructured data: HDFS handles the full range of data, including structured forms. In fact, Hadoop can manage and process just about any data you can store in a file and copy into HDFS.
Processing flexibility. Given its ability to manage diverse multi-structured data, as just described, Hadoop’s NoSQL approach is a natural framework for manipulating nontraditional data types. Note that these data types are often free of schema or metadata, which makes them challenging for most vendor brands of SQL-based RDBMSs, although a few have functions for deducing, creating, and applying schema as needed. Hadoop supports a variety of programming languages (Java, R, C), thus providing more capabilities than SQL alone can offer. Again, a few RDBMSs support these same languages as a complement to SQL.
In addition, Hadoop enables the growing practice of “late binding.” With ETL for data warehousing, data is processed, standardized, aggregated, and remodeled before entering the data warehouse environment; this imposes an a priori structure on the data, which is appropriate for known reports, but limits the scope of analytic repurposing later. Data entering HDFS is typically processed lightly or not at all to avoid limiting its future applications. Instead, Hadoop data is processed and restructured at run time, so it can flexibly enable the open-ended data exploration and discovery analytics that many users are looking for today.
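
The sketch below illustrates late binding under assumed paths and field names: raw JSON lands in HDFS untouched, and Spark infers a schema and applies structure only at run time, when an analyst queries the data.

    // Late binding: no ETL on ingest; structure is derived when the data is read.
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class LateBindingQuery {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("late-binding-exploration")
                    .getOrCreate();

            Dataset<Row> raw = spark.read().json("hdfs:///landing/clickstream/");
            raw.printSchema();                               // schema inferred at run time

            raw.createOrReplaceTempView("clicks");
            spark.sql("SELECT referrer, COUNT(*) AS visits FROM clicks GROUP BY referrer")
                 .show(20);

            spark.stop();
        }
    }
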
Hadoop and RDBMSs are complementary and should be used together
Hadoop’s help for data warehouse environments is limited to a few areas. Luckily, most of
Hadoop’s strengths are in areas where most warehouses and BI technology stacks are weak, such as unstructured data, very large data sets, non-SQL algorithmic analytics, and the flood of files that is drowning many DW environments. Conversely, Hadoop’s limitations are mostly met by mature functionality available today from a wide range of RDBMS types (OLTP databases, columnar databases, DW appliances, etc.), plus administrative tools. In that context, Hadoop and the average RDBMS-based data warehouse are complementary (despite some overlap), which results in a fortuitous synergy when the two are integrated.
The trick, of course, is making HDFS and an RDBMS work together optimally. To that end, one of the critical success factors for assimilating Hadoop into evolving data warehouse architectures is the improvement of interfaces and interoperability between HDFS and RDBMSs. Luckily, this is well under way due to efforts from software vendors and the open source community. Technical users are starting to leverage HDFS/RDBMS integration.
For example, an emerging best practice among DW professionals with Hadoop experience is to manage diverse big data in HDFS, but process it and move the results (via ETL or other data integration media) to RDBMSs (elsewhere in the DW architecture), which are more conducive to SQL-based analytics. Hence, HDFS serves as a massive data staging area and archive.
A similar best practice is to use an RDBMS as a front end to HDFS data; this way, data is moved via distributed queries (whether ad hoc or standardized), not via ETL jobs. HDFS serves as a large, diverse operational data store, whereas the RDBMS serves as a user-friendly semantic layer that makes HDFS data look relational.
Actian
Actian Corporation has accumulated a fairly comprehensive portfolio of platforms and tools for managing analytics, big data, and all other enterprise data, encompassing the full range of structured, semi-structured, and unstructured data and content types. The new Actian Analytics Platform includes connectivity to more than 200 sources, a visual framework that simplifies ETL and data science, high-performance analytic engines, and libraries of analytic functions.
The Actian Analytics Platform centers on Matrix (a massively parallel columnar RDBMS formerly called ParAccel) and Vector (a single-node RDBMS optimized for BI). Actian DataFlow accelerates ETL natively on Hadoop. Actian Analytics includes more than 500 analytic functions ready to run in-database or on Hadoop. Actian DataConnect connects and enriches data from over 200 sources on-premises or in the cloud. The Actian platform is integrated by a modular framework that enables users to quickly connect to all data assets for open-ended analytics with linear scalability.
Strategic partnerships include Hortonworks (for HDFS and YARN), Attivio (for big content), and a number of contributors to the Actian Analytics library.
Cloudera
Cloudera is a leading provider of Apache Hadoop–based software, services, and training, enabling data-driven organizations to derive business value from all their data while simultaneously reducing the costs of data management. CDH (Cloudera’s distribution including Apache Hadoop) is a comprehensive, tested, and stable distribution of Hadoop that is widely deployed in commercial and non-commercial environments. Organizations can subscribe to Cloudera Enterprise—comprising CDH, Cloudera Support, and the Cloudera Manager—to simplify and reduce the cost of Hadoop configuration, rollout, upgrades, and administration. Cloudera also provides Cloudera Enterprise Real-Time Query (RTQ), powered by Cloudera Impala, the first low-latency SQL query engine that runs directly over data in HDFS and HBase. Cloudera Search increases data ROI by offering non-technical resources a common and everyday method for accessing and querying large, disparate big data stores of mixed format and structure managed in Hadoop. As a major contributor to the Apache open source community, with customers in every industry and a massive partner program, Cloudera’s big data expertise is profound.
Datawatch Corporation provides a visual data discovery and analytics solution that optimizes any data—regardless of its variety, volume, or velocity—to reveal valuable insights for improving business decisions. Datawatch has a unique ability to integrate structured, unstructured, and semi-structured sources—such as reports, PDF files, print spools, and EDI streams—with real-time data streams from CEP engines, tick feeds, or machinery and sensors into visually rich analytic applications, which enable users to dynamically discover key factors about any operational aspect of their business.
Datawatch steps users through data access, exploration, discovery, analysis, and delivery, all in a unified and easy-to-use tool called Visual Data Discovery, which integrates with existing BI and big data platforms. IT’s involvement is minimal in that IT sets up data connectivity; most users can create their own reports and analyses, then publish them for colleagues to share in a self-service fashion. The solution is suitable for a single analyst, a department, or an enterprise. Regardless of user type, whether business or technical or both, all benefit from the high ease of use, productivity, and speed to insight that Datawatch’s real-time data visualization delivers.
Dell Software
For years, Dell Software has been acquiring and building software tools (plus partnering with leading vendors for more tools) with the goal of assembling a comprehensive portfolio of IT administration tools for securing and managing networks, applications, systems, endpoints, devices, and data. Within that portfolio, Dell Software now offers a range of tools specifically for data management, with a focus on big data and analytics. For example, Toad Data Point provides interfaces and administrative functions for most traditional databases and packaged applications, plus new big data platforms such as Hadoop, MongoDB, Cassandra, SimpleDB, and Azure. Spotlight is a DBA tool for monitoring DBMS health and benchmarking. Shareplex supports Oracle-to-Oracle replication today, and will soon support Hadoop. Kitenga Big Data Analytics enables rapid
transformation of diverse unstructured data into actionable insights. Boomi MDM launched in 2013. The new Toad BI Suite pulls these tools together to span the entire information life cycle of big data and analytics. After all, the goal of Dell Software is: one vendor, one tool chain, all data.
HP
HP Vertica provides solutions to big data challenges. The HP Vertica Analytics Platform was purpose-built for advanced analytics against big data. It consists of a massively parallel database with columnar support, plus an extensible analytics framework optimized for the real-time analysis of data. It is known for high performance with very complex analytic queries against multi-terabyte data sets.
Vertica offers advantages over SQL-on-Hadoop analytics, shortening some queries from days to minutes. Although SQL is the primary query language, Vertica also supports Java, R, and C.
Furthermore, the HP Vertica Flex Zone feature enables users to define and apply schema during query and analysis, thereby avoiding the need to preprocess data or deploy Hadoop or NoSQL platforms for schema-free data.
HP Vertica is part of HP’s new HAVEn platform, which integrates multiple products and services into a comprehensive big data platform that provides end-to-end information management for a wide range of structured and unstructured data domains. To simplify and accelerate the deployment of an analytic solution, HP offers the HP ConvergedSystem 300 for Vertica—a pre-built and pre-tested turn-key appliance.
MapR
MapR provides a complete distribution for Apache Hadoop, which is deployed at thousands of organizations globally for production, data-driven applications. MapR focuses on extending and advancing Hadoop, MapReduce, and NoSQL products and technologies to make them more feature rich, user friendly, dependable, and conducive to production IT environments. For example, MapR is spearheading the development of Apache Drill, which will bring ANSI SQL capabilities to Hadoop in the form of low-latency, interactive query capabilities for both structured and schema-free, nested data. As other examples, MapR is the first Hadoop distribution to integrate enterprise-grade search;
MapR enables flexible security via support for Kerberos and native authentication; and MapR provides a plug-and-play architecture for integrating real-time stream computational engines such as Storm with Hadoop. For greater high availability, MapR provides snapshots for point-in-time data rollback and a No NameNode architecture that avoids single points of failure within the system and ensures there are no bottlenecks to cluster scalability. In addition, it’s fast; MapR set the Terasort, MinuteSort, and YCSB world records.

ITIL Implementation Tips

It is the framework which changes with each new technology and not just the picture within the frame. –Marshall McLuhan

The Information Technology Infrastructure Library (ITIL) is a set of practices for IT service management (ITSM) that focuses on aligning IT services with the needs of business. In its current form (known as ITILv3 and ITIL 2011 edition), ITIL is published in a series of five core publications, each of which covers an ITSM lifecycle stage. ITILv3 underpins ISO/IEC 20000 (previously BS15000), the International Service Management Standard for IT service management, although differences between the two frameworks do exist. ITIL describes procedures, tasks and checklists that are not organization-specific, used by an organization for establishing a minimum level of competency. It allows the organization to establish a baseline from which it can plan, implement, and measure. It is used to demonstrate compliance and to measure improvement. The names ITIL and IT Infrastructure Library are registered trademarks of the United Kingdom’s Office of Government Commerce (OGC) – now part of the Cabinet Office.

Following this move, the ownership is now listed as being with HM Government rather than OGC. ITIL v3 is an extension of ITIL v2 and fully replaced it following the completion of the withdrawal period on 30 June 2011.

ITIL v3 provides a more holistic perspective on the full life cycle of services, covering the entire IT organisation and all supporting components needed to deliver services to the customer, whereas v2 focused on specific activities directly related to service delivery and support. Most of the v2 activities remained untouched in v3, but some significant changes in terminology were introduced in order to facilitate the expansion.

All companies are quite different, and CIOs may also have different understandings and experience of ITIL. Some think ITIL provides a tremendous amount of benefit to many global companies, while many other companies fail at its use, and still others use it as an excuse to slow down the speed of business. Is it one of those “old school” frameworks from the era when IT focused on risk mitigation and process integrity rather than customer satisfaction and business success? Or does ITIL still add value in ITSM at digital speed? Some CIOs are abandoning ITIL, while others use it religiously. Is it still appropriate, and why?

1. COMMON UNDERSTANDING OF ITIL IS VITAL TO ITS VALUE PROPOSITION IN ITSM 1) ITIL is a framework, not gospel. The elasticity and resiliency of any framework starts with an understanding that we are trying to provide a foundation for continued success . . . the goal should not be the construction of a monolithic standard that is incapable of adapting to changing needs.

ITIL is organized around a Service Lifecycle, which includes Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement. The lifecycle starts with Service Strategy – understanding who the IT customers are, the service offerings that are required to meet the customers’ needs, the IT capabilities and resources that are required to develop these offerings, and the requirements for executing successfully. Driven by strategy throughout the course of delivery and support of the service, IT must always try to assure that the cost of delivery is consistent with the value delivered to the customer.

Service Design assures that new and changed services are designed effectively to meet customer expectations. The technology and architecture required to meet customer needs cost effectively are an integral part of Service Design, as are the processes required to manage services. Service management systems and tools that are necessary to adequately monitor and support new or modified services must be considered, as well as mechanisms for measuring service levels, technology, and process efficiency and effectiveness.

Through the Service Transition phase of the lifecycle, the design is built, tested and moved into production to assure that the business customer can achieve the desired value. This phase addresses managing changes, controlling the assets and configuration items (underlying components – hardware, software, etc.) associated with new and changed systems, service validation and testing, and transition planning to assure that users, support personnel and the production environment have been prepared for the release to production.

Once transitioned, Service Operation then delivers the service on an ongoing basis, overseeing the daily overall health of the service. This includes managing disruptions to service through rapid restoration of incidents, determining the root cause of problems and detecting trends associated with recurring issues, handling daily routine end user requests, and managing service access.

Enveloping the Service Lifecycle is Continual Service Improvement (CSI). CSI offers a mechanism for IT to measure and improve the service levels, the technology, and the efficiency and effectiveness of processes used in the overall management of services.

2) ITIL is a recipe: Don’t eat the recipe; eat what you make from it! ITIL doesn’t give you all the answers, for one thing. It’s more a book of recipes than the finished article. It was intentionally designed to be a guideline and not the gospel. As such, it is expected to be tailored to meet the requirements of the organization. 3) ITIL is basically a detailed analysis of all the aspects of operations and recommendations for best practice. However, you can’t just implement ITIL as written; you have to use it as a guide for the development of operational procedures that suit your own operations. ITIL clearly doesn’t develop and adapt as quickly as some organizations change, and therefore operational managers have to use their brains to adapt it to satisfy the needs of the organization in which they work. 4) ITIL is a set of best practices and a framework, and Best Practice is not a one-off implementation, nor is it self-sustaining. As Version 3 of ITIL underlines, there should be an iterative and interactive lifecycle approach to the various processes. Best Practice is an ongoing commitment, and not a time-restricted project. 5) ITIL is a guideline – not a standard. Weaving it into the fabric of compliance as a standard will continue to cause heartburn. The more we change, the more we often stay the same . . . in so many respects.

2. TOP TEN REASONS WHY ITIL FAILS OR SOME MOVE AWAY FROM IT

1) The #1 reason for anyone to move away from it seems to be lack of flexibility and the CIO’s misconception that it adds more time to implementations, modernizations, and transformations.

2) ITIL is not to blame; the implementation of ITIL is to blame. To be efficient, ITIL should never be a burden to the operational staff, but a toolbox for working efficiently. The administrative burden should be taken on by the support system. ITIL is too frequently hijacked by administrative forces and turned into a nightmare of controlling layers.

3) It takes too long for ITIL to keep up with trends and new technologies requiring different models, such as Cloud and other new architectures. Organizations also feel it has required them to spend too much time on operational aspects.

4) Change Management Fails: The biggest failure in many organizations and their implementation of ITIL or other methodology is their strict adherence to the methodology without any consideration for adapting the methodology to their culture, business, technical infrastructure, operations, or even the circumstances of a given project.

5) Too Much IT Focus, Not Enough Business Focus: ITIL is still relevant, but sometimes organizations spend so long focusing on implementing the processes that they forget about the basics – discovering the cause of the problem and constantly improving.

6) Some organizations treat ITIL as an end in itself rather than a tool to help IT efficiently and effectively deliver the services the organization needs to achieve its overall goals. It is also essential to take into account the skills and experience of the staff that will operate the process when designing it so that it doesn’t become overly prescriptive and takes advantage of their professional expertise. ITIL can help you get there, but it doesn’t have to be the end all. 100% adherence to any methodology is not necessarily a good thing.

7) Some misunderstand that ITIL is not mandatory in its entirety and that it is one of several tools and guidelines they can use. There is no reason why you can’t take the best of ITIL, the parts that work well in your company culture, and tailor the rest. Infrastructure and operations benefit greatly from well-designed, air-tight processes that can be automated. The goal should be to right-size ITIL for your organization without breaking the bank.

8) People take “it” too seriously. The key is to look for improvement opportunities to solve problems or increase value, not simply to pass some process audit, and sending people on training is never the silver bullet. Otherwise ITIL just becomes the flavor of the day until the next fad comes along. When you start to expect it to be an all-encompassing solution for IT, you start to get into trouble. This is where you need to embrace other frameworks and even bring in your own creativity to be successful in the delivery of IT services.

9) Some believe ITIL is still relevant but costly, and that may explain why some are abandoning it. Efficiency should not come at any cost. The reason for failure is a mismatch of expectations and a failure to deliver on what was perceived to be the outcome.

10) ITIL turns into an inflexible doctrine that drags down the enterprise. The fault in failed ITIL initiatives lies not with the service lifecycle management framework, but rather with the application of that framework. A fundamental, conceptual understanding of continuous improvement is lacking from many implementations.

3. DEFINE THE RIGHT SET OF QUESTIONS TO EVALUATE ITIL OBJECTIVELY ITIL has gained a certain reputation, but it also causes confusion and even wasted resources. If a comprehensive survey were taken of ITIL users, these are the right questions to ask:

1) IT Maturity: On average, do ITIL users have significantly higher IT maturity, or is there not much difference?

2) Innovation: Innovation is what matters now, and most businesses think of IT as their innovation engine. So, do ITIL users have better capabilities to innovate, or worse? Why?

3) Value: What are the key values it can bring to IT or the business as a whole? How about the value/cost ratio? How about user feedback and the overall customer experience? How about short-term wins vs. the long-term perspective?

4) Agile: Is Agile complementary to ITIL? Or does ITIL become a barrier for a company adopting Agile? Although Agile came out of the software development world, can things like Kanban and Scrum be used effectively by infrastructure and support teams?

5) Change: Can ITIL adapt to change? Is ITIL still an effective framework for embracing IT/business changes with the right governance discipline? Or is ITIL an “old school” framework that applies controls rigidly and stifles change?

6) Simplicity: Does ITIL add unnecessary restrictions on users/systems, or does it have the necessary design complexity to enforce service delivery?

7) Digitalization: Can the ITIL framework help build a business’s digital capabilities/maturity, such as business/IT integration, tailored solutions, or a unified digital platform?

4. ITIL TIPS FOR CIOS

IT Service Management (ITSM) derives enormous benefits from a best practice approach. Because ITSM is driven both by technology and by the huge range of organizational environments in which it operates, it is in a state of constant evolution. Best practice, based on expert advice and input from ITIL users, is both current and practical, combining the latest thinking with sound, common-sense guidance.

ITIL is not one size fits all: ITIL and other processes can only work if tailored specifically to the environment a CIO finds him/herself in. What works for one organization may not work for another, even if implemented by the best ITIL practitioner in the business; and sometimes the CIO may rightly take the decision that a bespoke process is what’s needed rather than a widely adopted one such as ITIL.

Cloud Transformation: What role can ITIL play in such a transformation? With more and more companies adopting cloud, the opportunity has never been greater for IT to transform into a service-oriented organization and grow the business it serves. According to IDG research, more than one third of current IT budgets are allocated to cloud solutions. However, in their haste to adopt the cloud, CIOs may be missing an opportunity: the chance to use this transition to reshape IT.

Key to success is IT’s transformation into a services broker. With a service lifecycle approach, organizations can increase the velocity of IT service delivery and operate efficiently, without sacrificing governance. CIOs must see what they can get out of ITIL and, at the same time, what is best for the organization to adopt. No one is forcing anyone; rather, it is just a tool that helps you to be more vigilant and smart. CIOs must see the ROI of using this tool for the business in terms of value addition, controls, business benefits, etc.

BUILDING TRUST THROUGH TRANSPARENCY: In many organizations, IT needs to gain the trust of the business. Research to measure business perception of IT across many companies clearly demonstrates that, while IT is seen as an important partner, it receives low ratings in areas such as budget effectiveness, business understanding, and communication; any framework should enforce such transparency.

CIOs should have an in-depth understanding of ITIL at the strategic level: most CIOs, including those who actively champion ITSM, have little more than a superficial understanding of ITIL, or of the implications of adopting ITSM processes. Worse, they rarely regard the effort as a true organizational transformation effort touching every aspect of the IT organization, and many aspects of the enterprise organization.

Be pragmatic, not dogmatic. An organization has to balance the time it spends on process (ITIL) and the time it spends on products/deliverables. If the ITIL implementation becomes such a focus that the organization loses traction on deliverables, then a re-balancing would be in order.

Embrace Agile: Many organizations use Agile (including Scrum) as their mainstream software development methodology, and even as a management discipline. That said, what is needed from an effective framework is a governance process that is also agile enough to adapt to changes.

Social Collaboration: Emerging ITSM solutions may add social collaboration to service management to build a more democratic environment, such as DevOps converging IT development and operations to improve agility. The CIO’s evaluation of new tools may also include how the framework supports this trend and delivers innovative IT services and solutions.

Value-driven questions being asked by CIOs: “How much of this particular process or method should I implement in this role to get the business to where it needs to be?” The answer to that question should never be based on the technology in use in the business, but rather on the particular needs of the business – including taking into account where it currently sits with regard to the good practices proposed by ITIL and other methods out there.

As a reference framework, ITIL is not a “one size fits all” solution. CIOs should be innovators, not lemmings. Use what makes sense; apply it in a way that considers what’s unique about your organization, but without abandoning the spirit of the framework. IT becomes a business catalyst to build competitive uniqueness: how do you differentiate yourself from other IT organizations? Besides standardization, there are optimization and innovation. IT is shaping your business, but a framework is not a strategy. Do not let ITIL or any other framework ruin your common sense. Take it as a guideline, but add your own flavors and ingredients. Select a mix of frameworks, toolsets, and process architectures to improve flexibility and agility for the speed of business change, doing better with less, and doing more with innovation.

.NET

The .NET Framework is a technology that supports building and running the next generation of applications and XML Web services. The .NET Framework is designed to fulfill the following objectives:

  • To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
  • To provide a code-execution environment that minimizes software deployment and versioning conflicts.
  • To provide a code-execution environment that promotes safe execution of code, including code created by an unknown or semi-trusted third party.
  • To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
  • To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
  • To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.
The .NET Framework consists of the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that promote security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable ASP.NET applications and XML Web services, both of which are discussed later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and isolated file storage.

The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture.

.NET Framework in context

Managed code within a larger architecture

The following sections describe the main features of the .NET Framework in greater detail.

The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.

With regards to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.

The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich.

The runtime also enforces code robustness by implementing a strict type-and-code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.

In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references.

The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications.

While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs.

The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality-of-reference to further increase performance.

Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft SQL Server and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry’s best enterprise servers that support runtime hosting.

The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework.

For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework.

As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, you can use the .NET Framework to develop the following types of applications and services:

For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes.

Lean Software Development

David J. Anderson is the author of three books: Lessons in Agile Management: On the Road to Kanban, published in 2012; Kanban: Successful Evolutionary Change for your Technology Business,[1] published in 2010; and Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results,[2] published in 2003. He was a member of the team that created the Agile software development method Feature-Driven Development in Singapore between 1997 and 1999. He created MSF for CMMI Process Improvement, and he co-authored the Technical Note from the Software Engineering Institute, “CMMI and Agile: Why Not Embrace Both!” He was a founder of the Lean Systems Society (http://www.leansystemssociety.org). He is CEO of Lean-Kanban University Inc., an accredited training and quality standards organization offering Kanban training through a network of partners throughout the world, and he leads an international management training and consulting firm, David J. Anderson & Associates Inc. (http://www.agilemanagement.net), that helps technology businesses improve their performance through better management policies and decision making.

The term Lean Software Development was first coined as the title for a conference organized by the ESPRIT initiative of the European Union, held in Stuttgart, Germany, in October 1992. Independently, in 1993, Robert “Bob” Charette suggested the concept of “Lean Software Development” as part of his work exploring better ways of managing risk in software projects. The term “Lean” dates to 1991, suggested by James Womack, Daniel Jones, and Daniel Roos in their book The Machine That Changed the World: The Story of Lean Production[3] as the English-language term to describe the management approach used at Toyota. The idea that Lean might be applicable in software development was established very early, only 1 to 2 years after the term was first used in association with trends in manufacturing processes and industrial engineering.

In their 2nd book, published in 1995, Womack and Jones[4] defined five core pillars of Lean Thinking. These were:

  • Value
  • Value Stream
  • Flow
  • Pull
  • Perfection

This became the default working definition of Lean for most of the next decade. The pursuit of perfection, it was suggested, was achieved by eliminating waste. While there were five pillars, it was the fifth, the pursuit of perfection through the systematic identification and elimination of wasteful activities, that really resonated with a wide audience. Lean became almost exclusively associated with the practice of eliminating waste through the late 1990s and the early part of the 21st century.

The Womack and Jones definition of Lean is not shared universally. The principles of management at Toyota are far more subtle. The single English word “waste” is described more richly by three Japanese terms:

  • Muda – literally meaning “waste” but implying non-value-added activity
  • Mura – meaning “unevenness” and interpreted as “variability in flow”
  • Muri – meaning “overburdening” or “unreasonableness”

Perfection is pursued through the reduction of non-value-added activity, but also through the smoothing of flow and the elimination of overburdening. In addition, the Toyota approach was based on a foundational respect for people and was heavily influenced by the teachings of 20th-century quality assurance and statistical process control experts such as W. Edwards Deming.

Unfortunately, there are almost as many definitions for Lean as there are authors on the subject.

Bob Charette was invited but unable to attend the 2001 meeting at Snowbird, Utah, where the Manifesto for Agile Software Development was authored. Despite his absence from this historic meeting, Lean Software Development was considered one of several Agile approaches to software development. Jim Highsmith dedicated a chapter of his 2002 book to an interview with Charette about the topic. Later, Mary and Tom Poppendieck went on to author a series of three books. During the first few years of the 21st century, Lean principles were used to explain why Agile methods were better: Lean explained that Agile methods contained little “waste” and hence produced a better economic outcome. Lean principles were used as a “permission giver” to adopt Agile methods.
In recent years, Lean Software Development has emerged as its own discipline, related to, but not specifically a subset of, the Agile movement. This evolution started with the synthesis of ideas from Lean Product Development and the work of Donald G. Reinertsen, together with ideas emerging from the non-Agile world of large-scale systems engineering and the writing of James Sutton and Peter Middleton[12]. Anderson also synthesized the work of Eli Goldratt and W. Edwards Deming and developed a focus on flow rather than waste reduction. At the behest of Reinertsen, around 2005 he introduced the use of kanban systems that limit work-in-progress and “pull” new work only when the system is ready to process it. Alan Shalloway added his thoughts on Lean software development in his 2009 book on the topic. Since 2007, the emergence of Lean as a new force in the progress of the software development profession has been focused on improving flow, managing risk, and improving (management) decision making. Kanban has become a major enabler for Lean initiatives in IT-related work. It appears that a focus on flow, rather than on waste elimination, is proving a better catalyst for continuous improvement within knowledge-work activities such as software development.
Defining Lean Software Development is challenging because there is no specific Lean Software Development method or process. Lean is not an equivalent of the Personal Software Process, the V-Model, the Spiral Model, EVO, Feature-Driven Development, Extreme Programming, Scrum, or Test-Driven Development. A software development lifecycle process or a project management process can be said to be “lean” if it is observed to be aligned with the values and principles of the Lean Software Development movement. Those anticipating a simple recipe that can be followed and named Lean Software Development will therefore be disappointed: you must fashion or tailor your own software development process by understanding Lean principles and adopting the core values of Lean.

There are several schools of thought within Lean Software Development. The largest, and arguably leading, school is the Lean Systems Society, which includes Donald Reinertsen, Jim Sutton, Alan Shalloway, Bob Charette, Mary Poppendieck, and David J. Anderson. Mary and Tom Poppendieck’s work developed prior to the formation of the Society, and its credo stands separately, as does the work of Craig Larman and Bas Vodde[15,16] and, most recently, Jim Coplien[17]. This article seeks to be broadly representative of the Lean Systems Society viewpoint as expressed in its credo and to provide a synthesis and summary of their ideas.
The Lean Systems Society published its credo at the 2012 Lean Software & Systems Conference. This was based on a set of values published a year earlier. Those values include:

  • Accept the human condition
  • Accept that complexity & uncertainty are natural to knowledge work
  • Work towards a better Economic Outcome
  • While enabling a better Sociological Outcome
  • Seek, embrace & question ideas from a wide range of disciplines
  • A values-based community enhances the speed & depth of positive change
Knowledge work such as software development is undertaken by human beings. We humans are inherently complex and, while logical thinkers, we are also led by our emotions and some inherent animalistic traits that can’t reasonably be overcome. Our psychology and neuro-psychology must be taken into account when designing systems or processes within which we work. Our social behavior must also be accommodated. Humans are inherently emotional, social, and tribal, and our behavior changes with fatigue and stress. Successful processes will be those that embrace and accommodate the human condition rather than those that try to deny it and assume logical, machine-like behavior.
The behavior of customers and markets is unpredictable. The flow of work through a process and a collection of workers is unpredictable. Defects and required rework are unpredictable. There is inherent chance or seemingly random behavior at many levels within software development. The purpose, goals, and scope of projects tend to change while they are being delivered. Some of this uncertainty and variability, though initially unknown, is knowable in the sense that it can be studied and quantified and its risks managed, but some variability is unknowable in advance and cannot be adequately anticipated. As a result, systems of Lean Software Development must be able to react to unfolding events, and the system must be able to adapt to changing circumstances. Hence any Lean Software Development process must exist within a framework that permits adaptation (of the process) to unfolding events.
Human activities such as Lean Software Development should be focused on producing a better economic outcome. Capitalism is acceptable when it contributes both to the value of the business and the benefit of the customer. Investors and owners of businesses deserve a return on investment. Employees and workers deserve a fair rate of pay for a fair effort in performing the work. Customers deserve a good product or service that delivers on its promised benefits in exchange for a fair price paid. Better economic outcomes will involve delivery of more value to the customer, at lower cost, while managing the capital deployed by the investors or owners in the most effective way possible.
Better economic outcomes should not be delivered at the expense of those performing the work. Creating a workplace that respects people by accepting the human condition and provides systems of work that respect the psychological and sociological nature of people is essential. Creating a great place to do great work is a core value of the Lean Software Development community.
The Lean Software & Systems community seems to agree on a few principles that underpin Lean Software Development processes.

  • Follow a Systems Thinking & Design Approach
  • Emergent Outcomes can be Influenced by Architecting the Context of a Complex Adaptive System
  • Respect People (as part of the system)
  • Use the Scientific Method (to drive improvements)
  • Encourage Leadership
  • Generate Visibility (into work, workflow, and system operation)
  • Reduce Flow Time
  • Reduce Waste to Improve Efficiency
This is often referred to in Lean literature as “optimize the whole,” which implies that it is the output from the entire system (or process) that we desire to optimize, and we shouldn’t mistakenly optimize parts in the hope that it will magically optimize the whole. Most practitioners believe the corollary to be true: optimizing parts (local optimization) leads to a suboptimal outcome.

A Lean Systems Thinking and Design Approach requires that we consider the demands on the system made by external stakeholders, such as customers, and the desired outcome required by those stakeholders. We must study the nature of demand and compare it with the capability of our system to deliver. Demand will include so-called “value demand,” for which customers are willing to pay, and “failure demand,” which is typically rework or additional demand caused by a failure in the supply of value demand. Failure demand often takes two forms: rework on previously delivered value demand, and additional services or support due to a failure in supplying value demand. In software development, failure demand is typically requests for bug fixes and requests to a customer care or help desk function.

A systems design approach requires that we also follow the Plan-Do-Study-Act (PDSA) approach to process design and improvement. W. Edwards Deming used the words “study” and “capability” to imply that we study the natural philosophy of our system’s behavior. This system consists of our software development process and all the people operating it. It will have an observable behavior in terms of lead time, quality, quantity of features or functions delivered (referred to in Agile literature as “velocity”), and so forth. These metrics will exhibit variability and, by studying the mean and spread of variation, we can develop an understanding of our capability. If this is mismatched with demand and customer expectations, then the system will need to be redesigned to close the gap.

Deming also taught that capability is 95% influenced by system design, and only 5% is contributed by the performance of individuals. In other words, we can respect people by not blaming them for a gap in capability compared to demand, and by redesigning the system to enable them to be successful.

To understand system design, we must have a scientific understanding of the dynamics of system capability and how it might be affected. Models are developed to predict the dynamics of the system. While there are many possible models, several popular ones are in common usage: the understanding of economic costs, the so-called transaction and coordination costs that relate to the production of customer-valued products or services; the Theory of Constraints, the understanding of bottlenecks; and the System of Profound Knowledge, the study and recognition of variability as either common to the system design or special and external to the system design.
Complex systems have starting conditions and simple rules that, when run iteratively, produce an emergent outcome. Emergent outcomes are difficult or impossible to predict given the starting conditions. The computer science experiment “The Game of Life” is an example of a complex system. A complex adaptive system has within it some self-awareness and an internal method of reflection that enables it to consider how well its current set of rules is enabling it to achieve a desired outcome. The complex adaptive system may then choose to adapt itself – to change its simple rules – to close the gap between the current outcome and the desired outcome. The Game of Life, adapted such that the rules could be re-written during play, would be a complex adaptive system.

In software development processes, the “simple rules” of complex adaptive systems are the policies that make up the process definition. The core principle here is based in the belief that developing software products and services is not a deterministic activity, and hence a defined process that cannot adapt itself will not be an adequate response to unforeseeable events. The process designed as part of our systems thinking and design approach must therefore be adaptable; it adapts through the modification of the policies of which it is made.

The Kanban approach to Lean Software Development utilizes this concept by treating the policies of the kanban pull system as the “simple rules.” The starting conditions are that work and workflow are visualized, that flow is managed using an understanding of system dynamics, and that the organization uses a scientific approach to understanding, proposing, and implementing process improvements.
The Lean community adopts Peter Drucker’s definition of knowledge work that states that workers are knowledge workers if they are more knowledgeable about the work they perform than their bosses. This creates the implication that workers are best placed to make decisions about how to perform work and how to modify processes to improve how work is performed. So the voice of the worker should be respected. Workers should be empowered to self-organize to complete work and achieve desired outcomes. They should also be empowered to suggest and implement process improvement opportunities or “kaizen events” as they are referred to in Lean literature. Making process policies explicit so that workers are aware of the rules that constrain them is another way of respecting them. Clearly defined rules encourage self-organization by removing fear and the need for courage. Respecting people by empowering them and giving them a set of explicitly declared policies holds true with the core value of respecting the human condition.
Seek to use models to understand the dynamics of how work is done and how the system of Lean Software Development is operating. Observe and study the system and its capability, and then develop and apply models for predicting its behavior. Collect quantitative data in your studies, and use that data to understand how the system is performing and to predict how it might change when the process is changed.

The Lean Software & Systems community uses statistical methods, such as statistical process control charts and spectral analysis histograms of raw data for lead time and velocity, to understand system capability. They also use models such as the Theory of Constraints, to understand bottlenecks; the System of Profound Knowledge, to understand variation that is internal to the system design versus that which is externally influenced; and an analysis of economic costs in the form of tasks performed merely to coordinate, set up, deliver, or clean up after customer-valued products or services are created. Some other models are coming into use, such as Real Option Theory, which seeks to apply financial option theory from financial risk management to real-world decision making.

The scientific method suggests that we study; we postulate an outcome based on a model; we perturb the system based on that prediction; and we observe again to see if the perturbation produced the results the model predicted. If it did not, then we check our data and reconsider whether our model is accurate. Using models to drive process improvements makes improvement a scientific activity and elevates it above a superstitious activity based on intuition.
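
As a simple illustration of studying the mean and spread of a metric such as lead time, the sketch below computes the mean, standard deviation, and rough three-sigma control limits for a handful of hypothetical lead-time samples. The class name and sample data are invented for this example and are not part of any Lean tool or standard.

import java.util.Arrays;

// Minimal sketch: estimate system capability from observed lead times (in days)
// and derive simple control limits (mean +/- 3 standard deviations).
// The sample data and class name are illustrative assumptions.
public class LeadTimeCapability {
    public static void main(String[] args) {
        double[] leadTimesInDays = {4, 7, 5, 12, 6, 9, 5, 8};

        double mean = Arrays.stream(leadTimesInDays).average().orElse(0);
        double variance = Arrays.stream(leadTimesInDays)
                .map(t -> (t - mean) * (t - mean))
                .average().orElse(0);
        double stdDev = Math.sqrt(variance);

        System.out.printf("Mean lead time: %.1f days%n", mean);
        System.out.printf("Spread (standard deviation): %.1f days%n", stdDev);
        System.out.printf("Upper control limit: %.1f days%n", mean + 3 * stdDev);
        System.out.printf("Lower control limit: %.1f days%n", Math.max(0, mean - 3 * stdDev));
    }
}
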
Leadership and management are not the same. Management is the activity of designing processes, creating, modifying, and deleting policy, making strategic and operational decisions, gathering resources, providing finance and facilities, and communicating information about context such as strategy, goals, and desired outcomes. Leadership is about vision, strategy, tactics, courage, innovation, judgment, advocacy, and many more attributes. Leadership can and should come from anyone within an organization. Small acts of leadership from workers will create a cascade of improvements that will deliver the changes needed to create a Lean Software Development process.
Knowledge work is invisible. If you can’t see something, it is (almost) impossible to manage it. It is necessary to generate visibility into the work being undertaken and the flow of that work through a network of individuals, skills, and departments until it is complete. It is necessary to create visibility into the process design by finding ways of visualizing the flow of the process and by making the policies of the process explicit for everyone to see and consider. When all of these things are visible, then the use of the scientific method is possible, and conversations about potential improvements can be collaborative and objective. Collaborative process improvement is almost impossible if work and workflow are invisible and if process policies are not explicit.
The software development profession and the academics who study software engineering have traditionally focused on measuring the time spent working on an activity. The Lean Software Development community has discovered that it might be more useful to measure the actual elapsed calendar time something takes to be processed. This is typically referred to as Cycle Time and is usually qualified by the boundaries of the activities performed. For example, Cycle Time from Analysis to Ready for Deployment would measure the total elapsed time for a work item, such as a user story, to be analyzed, designed, developed, tested in several ways, and queued ready for deployment to a production environment.

Focusing on the time work takes to flow through the process is important for several reasons. Longer cycle times have been shown to correlate with a non-linear growth in bug rates, so shorter cycle times lead to higher quality. This is counter-intuitive, as it seems ridiculous that bugs could be inserted in code while it is queuing and no human is actually touching it. Traditionally, the software engineering profession and the academics who study it have ignored this idle time. However, empirical evidence suggests that cycle time is important to initial quality.

Alan Shalloway has also talked about the concept of “induced work.” His observation is that a delay in performing a task can lead to that task taking much more effort than it otherwise would have. For example, a bug found and fixed immediately may take only 20 minutes to fix, but if that bug is triaged, queued, and then waits for several days or weeks to be fixed, it may take several or many hours to make the fix. The cycle time delay has therefore “induced” additional work. As this work is avoidable, in Lean terms it must be seen as “waste.”

The third reason for focusing on cycle time is a business-related one. Every feature, function, or user story has a value. That value may be uncertain, but nevertheless there is a value, and it may vary over time. The concept of value varying over time can be expressed economically as a market payoff function. When the market payoff function for a work item is understood, even if the function exhibits a spread of values to model uncertainty, it is possible to evaluate a “cost of delay.” The cost of delay allows us to put a value on reducing cycle time.

With some work items, the market payoff function does not start until a known date in the future. For example, a feature designed to be used during the 4th of July holiday in the United States has no value prior to that date. Shortening cycle time and being able to predict cycle time with some certainty is still useful in such an example. Ideally, we want to start the work so that the feature is delivered “just in time” when it is needed, neither significantly prior to the desired date nor late, as late delivery incurs a cost of delay. Just-in-time delivery ensures that optimal use is made of available resources; early delivery implies that we might have worked on something else and have, by implication, incurred an opportunity cost of delay.

For these three reasons, Lean Software Development seeks to minimize flow time and to record data that enables predictions about flow time. The objective is to minimize failure demand from bugs and waste from overburdening due to delays in fixing bugs, and to maximize the value delivered by avoiding both cost of delay and opportunity cost of delay.
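
As a rough, numerical illustration of the cost-of-delay reasoning above, the sketch below compares an observed cycle time with a shorter target cycle time using an assumed weekly payoff; the figures and class name are invented for the example and carry no significance beyond illustration.

// Illustrative sketch of cost of delay: the payoff per week and the cycle
// times are invented; a real market payoff function would come from
// business analysis, not from code.
public class CostOfDelayExample {
    public static void main(String[] args) {
        double valuePerWeek = 10_000.0;   // assumed value the feature earns once delivered
        int observedCycleTimeWeeks = 6;   // elapsed time from start to ready-for-deployment
        int targetCycleTimeWeeks = 4;     // what a shorter, smoother flow might achieve

        double costOfDelay = (observedCycleTimeWeeks - targetCycleTimeWeeks) * valuePerWeek;
        System.out.printf("Reducing cycle time from %d to %d weeks avoids roughly %.0f of delay cost%n",
                observedCycleTimeWeeks, targetCycleTimeWeeks, costOfDelay);
    }
}
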
For every value-added activity, there are setup, cleanup, and delivery activities that are necessary but do not add value in their own right. For example, a project iteration that develops an increment of working software requires planning (a setup activity), an environment and perhaps a code branch in version control (collectively known as configuration management, also a setup activity), a release plan and the performance of the actual release (a delivery activity), a demonstration to the customer (a delivery activity), and perhaps an environment teardown or reconfiguration (a cleanup activity). In economic terms, the setup, cleanup, and delivery activities are transaction costs on performing the value-added work. These costs (or overheads) are considered waste in Lean.

Any form of communication overhead can also be considered waste. Meetings to determine project status and to schedule or assign work to team members would be considered a coordination cost in economic language. All coordination costs are waste in Lean thinking. Lean software development methods seek to eliminate or reduce coordination costs through colocation of team members, short face-to-face meetings such as standups, and visual controls such as card walls.

The third common form of waste in Lean Software Development is failure demand. Failure demand is a burden on the system of software development; it is typically rework or new work generated as a side effect of poor quality. The most typical forms of failure demand in software development are bugs, production defects, and customer support activities driven by a failure to use the software as intended. The percentage of work-in-progress that is failure demand is often referred to as Failure Load. The percentage of value-adding work against failure demand is a measure of the efficiency of the system, and the percentage of value-added work against the total work, including all the non-value-adding transaction and coordination costs, determines the level of efficiency. A system with no transaction and coordination costs and no failure load would be considered 100% efficient.

Traditionally, Western management science has taught that efficiency can be improved by increasing the batch size of work. Typically, transaction and coordination costs are fixed, or rise only slightly, with an increase in batch size. As a result, large batches of work appear more efficient. This concept is known as “economy of scale.” However, in knowledge-work problems, coordination costs tend to rise non-linearly with batch size, while transaction costs often exhibit linear growth. As a result, the traditional 20th-century approach to efficiency is not appropriate for knowledge-work problems such as software development.

It is better to focus on reducing the overheads while keeping batch sizes small in order to improve efficiency. Hence, the Lean way to be efficient is to reduce waste. Lean software development methods focus on fast, cheap, and quick planning methods; low communication overhead; and effective, low-overhead coordination mechanisms such as visual controls in kanban systems. They also encourage automated testing and automated deployment to reduce the transaction costs of delivery. Modern tools for minimizing the costs of environment setup and teardown, such as modern version control systems and the use of virtualization, also help to improve the efficiency of small batches of software development.
Lean Software Development does not prescribe practices. It is more important to demonstrate that actual process definitions are aligned with the principles and values. However, a number of practices are being commonly adopted. This section provides a brief overview of some of these.

Cumulative Flow Diagrams have been a standard part of reporting in Team Foundation Server since 2005. Cumulative flow diagrams plot an area graph of cumulative work items in each state of a workflow. They are rich in information and can be used to derive the mean cycle time between steps in a process as well as the throughput rate (or “velocity”). Different software development lifecycle processes produce different visual signatures on cumulative flow diagrams. Practitioners can learn to recognize patterns of dysfunction in the process displayed in the area graph. A truly Lean process will show evenly distributed areas of color, smoothly rising at a steady pace. The picture will appear smooth without jagged steps or visible blocks of color.

In their most basic form, cumulative flow diagrams are used to visualize the quantity of work-in-progress at any given step in the work item lifecycle. This can be used to detect bottlenecks and to observe the effects of “mura” (variability in flow).
In addition to the use of cumulative flow diagrams, Lean Software Development teams use physical boards, or projections of electronic visualization systems, to visualize work and observe its flow. Such visualizations help team members observe work-in-progress accumulating and enable them to see bottlenecks and the effects of “mura.” Visual controls also enable team members to self-organize to pick work and collaborate together without planning or specific management direction or intervention. These visual controls are often referred to as “card walls” or sometimes (incorrectly) as “kanban boards.”
A kanban system is a practice adopted from Lean manufacturing. It uses a system of physical cards to limit the quantity of work-in-progress at any given stage in the workflow. Such work-in-progress limited systems create a “pull” in which new work is started only when there are free kanban, indicating that new work can be “pulled” into a particular state and work can progress on it.

In Lean Software Development, the kanban are virtual and are often tracked by setting a maximum number for a given step in the workflow of a work item type. In some implementations, electronic systems keep track of the virtual kanban and provide a signal when new work can be started. The signal can be visual or in the form of an alert such as an email.

Virtual kanban systems are often combined with visual controls to provide a visual virtual kanban system representing the workflow of one or several work item types. Such systems are often referred to as “kanban boards” or “electronic kanban systems.” A visual virtual kanban system is available as a plug-in for Team Foundation Server, called Visual WIP[20]. This project was developed as open source by Hakan Forss in Sweden.
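
The toy sketch below illustrates the pull behaviour of a work-in-progress limited (virtual kanban) step: new work is accepted only when a free kanban slot signals that it can be pulled. The class and method names are invented for illustration and do not represent Team Foundation Server, Visual WIP, or any other real kanban tool.

import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of a single WIP-limited workflow step in a virtual kanban system.
public class KanbanStep {
    private final String name;
    private final int wipLimit;
    private final Deque<String> inProgress = new ArrayDeque<>();

    public KanbanStep(String name, int wipLimit) {
        this.name = name;
        this.wipLimit = wipLimit;
    }

    // Pull a work item into this step only if a kanban (free slot) is available.
    public boolean pull(String workItem) {
        if (inProgress.size() >= wipLimit) {
            System.out.println(name + " is at its WIP limit; " + workItem + " waits upstream");
            return false;
        }
        inProgress.add(workItem);
        System.out.println(name + " pulled " + workItem);
        return true;
    }

    // Completing an item frees a kanban, signalling that new work may be pulled.
    public void complete(String workItem) {
        inProgress.remove(workItem);
        System.out.println(name + " completed " + workItem);
    }

    public static void main(String[] args) {
        KanbanStep development = new KanbanStep("Development", 2);
        development.pull("Story-101");
        development.pull("Story-102");
        development.pull("Story-103"); // rejected: WIP limit reached
        development.complete("Story-101");
        development.pull("Story-103"); // a free slot now signals that work can be pulled
    }
}
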
Lean Software Development requires that work be undertaken either in small batches, often referred to as “iterations” or “increments,” or as work items that flow independently, referred to as “single-piece flow.” Single-piece flow requires a sophisticated configuration management strategy to enable completed work to be delivered while incomplete work is not released accidentally. This is typically achieved using branching strategies in the version control system. A small batch of work would typically be considered one that can be undertaken by a small team of eight people or fewer in under two weeks.

Small batches and single-piece flow require frequent interaction with business owners to replenish the backlog or queue of work. They also require the capability to release frequently. To enable frequent interaction with business people and frequent delivery, it is necessary to shrink the transaction and coordination costs of both activities. A common way to achieve this is the use of automation.
Lean Software Development expects a high level of automation to economically enable single-piece flow and to encourage high quality and the reduction of failure demand. The use of automated testing, automated deployment, and software factories to automate the deployment of design patterns and creation of repetitive low variability sections of source code will all be commonplace in Lean Software Development processes.
In Lean literature, the term kaizen means “continuous improvement,” and a kaizen event is the act of making a change to a process or tool that hopefully results in an improvement.

Lean Software Development processes use several different activities to generate kaizen events; these are described below. Each of these activities is designed to stimulate a conversation about problems that adversely affect capability and, consequently, the ability to deliver against demand. The essence of kaizen in knowledge work is that we must provoke conversations about problems across groups of people from different teams and with different skills.
Teams of software developers, often up to 50 people, typically meet in front of a visual control system such as a whiteboard displaying a visualization of their work-in-progress. They discuss the dynamics of flow and the factors affecting the flow of work. Particular attention is paid to externally blocked work and work delayed due to bugs. Problems with the process often become evident over a series of standup meetings. The result is that a smaller group may remain after the meeting to discuss the problem and propose a solution or process change; a kaizen event will follow. These spontaneous meetings are often referred to as quality circles in older literature. Such spontaneous meetings are at the heart of a truly kaizen culture. Managers will encourage the emergence of kaizen events after daily standup meetings in order to drive adoption of Lean within their organization.
Project teams may schedule regular meetings to reflect on recent performance. These are often held after specific project deliverables are complete or after time-boxed increments of development, known as iterations or sprints in Agile software development.

Retrospectives typically use an anecdotal approach to reflection, asking questions like “What went well?”, “What would we do differently?”, and “What should we stop doing?” Retrospectives typically produce a backlog of suggestions for kaizen events, some of which the team may then prioritize for implementation.
An operations review is typically larger than a retrospective and includes representatives from a whole value stream. It is common for as many as 12 departments to present objective, quantitative data showing the demand they received and reflecting their capability to deliver against that demand. Operations reviews are typically held monthly. The key difference between an operations review and a retrospective is that operations reviews span a wider set of functions, typically span a portfolio of projects and other initiatives, and use objective, quantitative data. Retrospectives, in comparison, tend to be scoped to a single project; involve just a few teams, such as analysis, development, and test; and are generally anecdotal in nature.

An operations review will provoke discussions about the dynamics affecting performance between teams. Perhaps one team generates failure demand that is processed by another team? Perhaps that failure demand is disruptive and causes the second team to miss its commitments and fail to deliver against expectations? An operations review provides an opportunity to discuss such issues and propose changes. Operations reviews typically produce a small backlog of potential kaizen events that can be prioritized and scheduled for future implementation.

There is no such thing as a single Lean Software Development process. A process can be said to be Lean if it is clearly aligned with the values and principles of Lean Software Development. Lean Software Development does not prescribe any practices, but some activities have become common. Lean organizations seek to encourage kaizen through visualization of workflow and work-in-progress and through an understanding of the dynamics of flow and the factors (such as bottlenecks, non-instant availability, variability, and waste) that affect it. Process improvements are suggested and justified as ways to reduce sources of variability, eliminate waste, improve flow, or improve value delivery or risk management. As such, Lean Software Development processes will always be evolving and uniquely tailored to the organization within which they evolve. It would not be natural to simply copy a process definition from one organization to another and expect it to work in a different context, and it is unlikely that, on returning to an organization after a few weeks or months, you would find the process in use to be the same as was observed earlier. It will always be evolving.

The organization using a Lean software development process could be said to be Lean if it exhibited only small amounts of waste in all three forms (“mura,” “muri,” and “muda”) and could be shown to be optimizing the delivery of value through effective management of risk. The pursuit of perfection in Lean is always a journey. There is no destination. True Lean organizations are always seeking further improvement.

Lean Software Development is still an emerging field, and we can expect it to continue to evolve over the next decade.

  1. Anderson, David J., Kanban: Successful Evolutionary Change for your Technology Business, Blue Hole Press, 2010
  2. Anderson, David J., Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results, Prentice Hall PTR, 2003
  3. Womack, James P., Daniel T. Jones, and Daniel Roos, The Machine That Changed the World: The Story of Lean Production, updated edition, Free Press, 2007
  4. Womack, James P., and Daniel T. Jones, Lean Thinking: Banish Waste and Create Wealth in your Corporation, 2nd Edition, Free Press, 2003
  5. Beck, Kent et al, The Manifesto for Agile Software Development, 2001 http://www.agilemanifesto.org/
  6. Highsmith, James A., Agile Software Development Ecosystems, Addison Wesley, 2002
  7. Poppendieck, Mary and Tom Poppendieck, Lean Software Development: An Agile Toolkit, Addison Wesley, 2003
  8. Poppendieck, Mary and Tom Poppendieck, Implementing Lean Software Development: From Concept to Cash, Addison Wesley, 2006
  9. Poppendieck, Mary and Tom Poppendieck, Leading Lean Software Development: Results are not the Point, Addison Wesley, 2009
  10. Reinertsen, Donald G., Managing the Design Factory, Free Press, 1997
  11. Reinertsen, Donald G., The Principles of Product Development Flow: Second Generation Lean Product Development, Celeritas Publishing, 2009
  12. Sutton, James and Peter Middleton, Lean Software Strategies: Proven Techniques for Managers and Developers, Productivity Press, 2005
  13. Anderson, David J., Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results, Prentice Hall PTR, 2003
  14. Shalloway, Alan, Guy Beaver, and James R. Trott, Lean-Agile Software Development: Achieving Enterprise Agility, Addison Wesley, 2009
  15. Larman, Craig and Bas Vodde, Scaling Lean & Agile Development: Thinking and Organizational Tools for Large-scale Scrum, Addison Wesley Professional, 2008
  16. Larman, Craig and Bas Vodde, Practices for Scaling Lean & Agile Development: Large, Multisite, and Offshore Product Development with Large-Scale Scrum, Addison Wesley Professional, 2010
  17. Coplien, James O. and Gertrud Bjornvig, Lean Architecture: for Agile Software Development, Wiley, 2010
  18. http://leansystemssociety.org/credo/
  19. http://lssc12.leanssc.org/
  20. http://hakanforss.wordpress.com/2010/11/23/visual-wip-a-kanban-board-for-tfs/

Agile Software Development: Eight Tips for Better Code Testing

You know about agile software development, wherein coding is quick and continuous. Because of continual releases and ongoing development, testing is an integral part of agile development. Without testing builds frequently and effectively, you cannot ensure their quality. Agile testers face a few challenges:

  • Creating daily builds and testing them
  • Collecting requirements and the amount of time committed
  • Keeping the meetings short and code inspections long

An agile tester should be highly proficient with their tools, be a team player, and have good coding skills. Here are eight tips to help you be more efficient in agile software testing.

Tips for Better Agile Software Code Testing

1. Modify Your Character Traits

Successful agile testers have particular character traits and mindsets. You should be passionate about coding, creative to some extent, and forthcoming with your opinions. Soft skills in communication, management, and leadership are important. Agile development and testing require you to know the client’s expectations before the delivery of the program.

2. Learn How the Data Flows Through the Application

In order to analyze your application and know how it works, first learn how the data flows inside it. Knowing the data flow tells you volumes about the components and how they interact with each other. It also gives important information about the data security of the application. Knowledge of the data flow is very important for recognizing and reporting defects in your app.

3. Application Log Analysis

Testing the AUT (application under test) requires you to analyze its logs, especially in the case of agile testing. These logs give you a lot of information about the system architecture of the AUT. You may have heard about “silent errors”: errors that don’t show their effects to the end users immediately. Log analysis is your friend if you want to spot silent errors faster and be more useful to the development team.
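
As a minimal sketch of this kind of log scan, the snippet below filters a hypothetical application log for ERROR and WARN entries; the log path and line format are assumptions about an imaginary AUT, not a standard.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Minimal sketch: scan an application log for entries that may point to
// "silent errors". The path and the ERROR/WARN keywords are assumptions.
public class LogScan {
    public static void main(String[] args) throws IOException {
        try (Stream<String> lines = Files.lines(Paths.get("logs/application.log"))) {
            lines.filter(line -> line.contains("ERROR") || line.contains("WARN"))
                 .forEach(System.out::println);
        }
    }
}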

4. Change- and Risk-Based Testing

In an agile environment, software coding and testing happen fast. Time to market is very important here, and the development and testing teams work together to achieve minimum go-to-market times. In this environment, it is important to understand which parts of the application are being changed in each modification. If you can estimate the overall effect of a change, you can better spot bugs and errors.

5. Know the Objectives

You, the agile tester, have to perceive the application as an end user would and use it the way an end user would. To come up with the best testing strategy, you should understand the key areas, parts, or features of the application that an end user is most likely to use. You may also need separate strategies for the product architecture. This end-user focus helps you test against the application’s business objectives, which means you can easily prioritize defects. Meeting the needs of the end user is the most important aspect of software development anyway.

6. Use Browser Plugins and Tools

Agile testers will, from time to time, realize the value of browser tools. Google Chrome and Mozilla Firefox come with built-in developer tools that allow the tester to spot errors quickly. You can also use a third-party plugin (Firebug, for example) for testing.

7. Repositories of Requirements

You have to know the type of agile strategy that your organization uses: Agile Unified Process (AUP), Adaptive Software Development (ASD), Scrum, Kanban, etc. The testing and development teams may create documents on test cases, and you should analyze all the documentation. Over time, the requirements and test scenarios accumulate into a large repository from which you can gather quite a bit of information.

8. Test Early, Often, and Always

Exploratory Testing (ET) is the sort of testing in which test design and execution happen at the same time; it is an important agile practice. In order to develop and deliver an application, testing has to be done as early, as often, and as continuously as possible. Other testing types, such as functional and load testing, should also be incorporated into the project plan for more efficiency.

Conclusion

Agile development depends a great deal on the stages of development; the process matters as much as the end product. This is why testing has become a major part of development. In current agile development scenarios, unlike earlier times, software companies and professionals take a real-time look at testing environments and test cases.


Programming Languages Supported by Selenium 2

Selenium

Selenium is a portable software testing framework for web applications. It provides a record/playback tool (Selenium IDE) for authoring tests without learning a test scripting language, and Selenium 2 (WebDriver) provides client bindings for writing tests in the languages listed below.
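
As a minimal sketch of driving a browser from test code, the example below uses the Selenium WebDriver Java bindings (Java being one of the supported languages listed below). It assumes the Selenium Java client library is on the classpath, and the URL and element locator are placeholders for whatever application is under test.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

// Minimal WebDriver sketch: open a page, read an element, and quit.
public class SeleniumExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();   // start a browser session
        driver.get("http://www.example.com");     // placeholder URL for the application under test
        WebElement heading = driver.findElement(By.tagName("h1"));
        System.out.println("Page heading: " + heading.getText());
        driver.quit();                            // end the session and close the browser
    }
}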

Supporting languages

  • Java

  • C#

  • PHP

  • Python

  • Perl

  • Ruby

Java (software platform)

History

The Java platform and language began as an internal project at Sun Microsystems in December 1990, providing an alternative to the C++/C programming languages. Engineer Patrick Naughton had become increasingly frustrated with the state of Sun’s C++ and C application programming interfaces (APIs) and tools. While considering a move to NeXT, Naughton was offered a chance to work on new technology, and thus the Stealth Project was started.

The Stealth Project was soon renamed to the Green Project with James Gosling and Mike Sheridan joining Naughton. Together with other engineers, they began work in a small office on Sand Hill Road in Menlo Park, California. They were attempting to develop a new technology for programming next generation smart appliances, which Sun expected to be a major new opportunity.

The team originally considered using C++, but it was rejected for several reasons. Because they were developing an embedded system with limited resources, they decided that C++ needed too much memory and that its complexity led to developer errors. The language’s lack of garbage collection meant that programmers had to manually manage system memory, a challenging and error-prone task. The team was also troubled by the language’s lack of portable facilities for security, distributed programming, and threading. Finally, they wanted a platform that could be easily ported to all types of devices.

Bill Joy had envisioned a new language combining Mesa and C. In a paper called Further, he proposed to Sun that its engineers should produce an object-oriented environment based on C++. Initially, Gosling attempted to modify and extend C++ (which he referred to as “C++ ++ --”) but soon abandoned that in favor of creating a new language, which he called Oak after the tree that stood just outside his office.

In 1994, the language was renamed Java after a trademark search revealed that the name Oak was already used by Oak Technology. Although Java 1.0a was available for download in 1994, the first public release of Java, 1.0a2 with the HotJava browser, came on May 23, 1995, announced by John Gage at the SunWorld conference.

On January 9, 1996, Sun Microsystems formed the JavaSoft group to develop the technology.

There were five primary goals in the creation of the Java language:

  1. It should be “simple, object-oriented and familiar”

  2. It should be “robust and secure”

  3. It should be “architecture-neutral and portable”

  4. It should execute with “high performance”

  5. It should be “interpreted, threaded, and dynamic”

Versions

Major release versions of Java, along with their release dates:

  • JDK 1.0 (January 21, 1996)

  • JDK 1.1 (February 19, 1997)

  • J2SE 1.2 (December 8, 1998)

  • J2SE 1.3 (May 8, 2000)

  • J2SE 1.4 (February 6, 2002)

  • J2SE 5.0 (September 30, 2004)

  • Java SE 6 (December 11, 2006)

  • Java SE 7 (July 28, 2011)

  • Java EE 7 (October 27, 2013)

Java platform

One characteristic of Java is portability, which means that computer programs written in the Java language must run similarly on any hardware/operating-system platform. This is achieved by compiling the Java language code to an intermediate representation called Java byte code, instead of directly to platform-specific machine code. Standardized libraries provide a generic way to access host-specific features such as graphics, threading, and networking.

A major benefit of using bytecode is portability. However, the overhead of interpretation means that interpreted programs almost always run more slowly than programs compiled to native executables. Just-in-Time (JIT) compilers that compile bytecode to machine code at runtime were introduced at an early stage to close this gap.

Performance

Programs written in Java have a reputation for being slower and requiring more memory than those written in C++. However, Java programs’ execution speed improved significantly with the introduction of just-in-time compilation in 1997/1998 for Java 1.1, the addition of language features supporting better code analysis (such as inner classes, the StringBuilder class, optional assertions, etc.), and optimizations in the Java virtual machine itself, such as HotSpot becoming the default for Sun’s JVM in 2000.

Java uses an automatic garbage collector to manage memory in the object lifecycle. The programmer determines when objects are created, and the Java runtime is responsible for recovering the memory once objects are no longer in use. Once no references to an object remain, the unreachable memory becomes eligible to be freed automatically by the garbage collector. Something similar to a memory leak may still occur if a programmer’s code holds a reference to an object that is no longer needed, typically when objects that are no longer needed are stored in containers that are still in use. If methods for a nonexistent object are called, a “null pointer exception” is thrown.

One of the ideas behind Java’s automatic memory management model is that programmers can be spared the burden of having to perform manual memory management. In some languages, memory for the creation of objects is implicitly allocated on the stack, or explicitly allocated and deallocated from the heap. In the latter case the responsibility of managing memory resides with the programmer. If the program does not deallocate an object, a memory leak occurs. If the program attempts to access or deallocate memory that has already been deallocated, the result is undefined and difficult to predict, and the program is likely to become unstable and/or crash. This can be partially remedied by the use of smart pointers, but these add overhead and complexity. Note that garbage collection does not prevent “logical” memory leaks, i.e. those where the memory is still referenced but never used.
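
The sketch below illustrates such a “logical” leak: objects that are still reachable through a collection cannot be reclaimed by the garbage collector, even though the program no longer needs them. The class name and buffer sizes are invented for illustration.

import java.util.ArrayList;
import java.util.List;

// Objects held in a live collection remain reachable, so the garbage
// collector cannot reclaim them until the references are removed.
public class LogicalLeakExample {
    private static final List<byte[]> cache = new ArrayList<>();

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            byte[] buffer = new byte[1024 * 1024]; // 1 MB that is not needed after this loop
            cache.add(buffer);                     // but it stays reachable via the list
        }
        // Removing the references makes the memory eligible for garbage collection.
        cache.clear();
    }
}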

Garbage collection may happen at any time. Ideally, it will occur when a program is idle. It is guaranteed to be triggered if there is insufficient free memory on the heap to allocate a new object; this can cause a program to stall momentarily. Explicit memory management is not possible in Java.

Java does not support C/C++ style pointer arithmetic, where object addresses and unsigned integers (usually long integers) can be used interchangeably. This allows the garbage collector to relocate referenced objects and ensures type safety and security.

As in C++ and some other object-oriented languages, variables of Java’s primitive data types are not objects. Values of primitive types are either stored directly in fields (for objects) or on the stack (for methods) rather than on the heap, as is commonly true for objects. This was a conscious decision by Java’s designers for performance reasons. Because of this, Java was not considered a pure object-oriented programming language. However, as of Java 5.0, autoboxing enables programmers to proceed as if primitive types were instances of their wrapper classes.
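
A short illustration of autoboxing and unboxing follows; the class name is invented for the example.

import java.util.ArrayList;
import java.util.List;

// Autoboxing (Java 5.0 and later): primitives are wrapped in their wrapper
// classes where an object is required, and unwrapped again on the way out.
public class AutoboxingExample {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>();
        numbers.add(42);            // the int 42 is boxed into an Integer object
        int first = numbers.get(0); // the Integer is unboxed back to an int
        Integer boxed = 7;          // boxing on assignment
        int sum = boxed + first;    // unboxing during arithmetic
        System.out.println(sum);    // prints 49
    }
}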

Syntax

The syntax of Java is largely derived from C++. Unlike C++, which combines the syntax for structured, generic, and object-oriented programming, Java was built almost exclusively as an object-oriented language. All code is written inside a class, and everything is an object, with the exception of the primitive data types (integers, floating-point numbers, boolean values, and characters), which are not classes, for performance reasons. Unlike C++, Java does not support operator overloading or multiple inheritance for classes.
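
Although a Java class cannot extend more than one class, it can implement several interfaces, which covers many of the cases where multiple inheritance would otherwise be used. The interface and class names below are invented for illustration.

// A class may implement any number of interfaces, inheriting their contracts
// without the ambiguities of multiple class inheritance.
interface Printable {
    void print();
}

interface Exportable {
    String export();
}

class Report implements Printable, Exportable {
    private final String title;

    Report(String title) {
        this.title = title;
    }

    @Override
    public void print() {
        System.out.println("Report: " + title);
    }

    @Override
    public String export() {
        return "{\"title\": \"" + title + "\"}";
    }

    public static void main(String[] args) {
        Report report = new Report("Quarterly results");
        report.print();
        System.out.println(report.export());
    }
}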

Special classes

Applet

Java applets are programs that are embedded in other applications, typically in a Web page displayed in a Web browser.

Servlet

Java Servlet technology provides Web developers with a simple, consistent mechanism for extending the functionality of a Web server and for accessing existing business systems. Servlets are server-side Java EE components that generate responses (typically HTML pages) to requests (typically HTTP requests) from clients. A servlet can almost be thought of as an applet that runs on the server side—without a face.
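
A minimal servlet might look like the sketch below; it assumes the standard javax.servlet API on the classpath and deployment in a servlet container such as Tomcat, with the URL mapping supplied by the container’s configuration.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A server-side component that answers HTTP GET requests with a small HTML page.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        response.getWriter().println("<h1>Hello from a servlet</h1>");
    }
}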

JavaServer Pages

JavaServer Pages (JSP) are server-side Java EE components that generate responses, typically HTML pages, to HTTP requests from clients. JSPs embed Java code in an HTML page by using the special delimiters <% and %>. A JSP is compiled to a Java servlet, a Java application in its own right, the first time it is accessed. After that, the generated servlet creates the response.

Swing application

Swing is a graphical user interface library for the Java SE platform. It is possible to specify a different look and feel through Swing’s pluggable look and feel system, although emulated look and feels have not always matched the native platform exactly; Swing in Java SE 6 addresses this by using more native GUI widget drawing routines of the underlying platforms.
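
A minimal Swing application, shown here as an illustrative sketch, creates a window with a single label on the event dispatch thread, as Swing requires.

import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

// Minimal Swing sketch: build the GUI on the event dispatch thread.
public class HelloSwing {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Hello Swing");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new JLabel("This is a Swing application"));
            frame.pack();
            frame.setVisible(true);
        });
    }
}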

Example

class HelloWorldApp {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}

Output: Hello World!

JavaScript

JavaScript (JS) is an interpreted computer programming language. As part of web browsers, implementations allow client-side scripts to interact with the user, control the browser, communicate asynchronously, and alter the document content that is displayed. It has also become common in server-side programming, game development and the creation of desktop applications.

JavaScript is a prototype-based scripting language with dynamic typing and has first-class functions. Its syntax was influenced by C. JavaScript copies many names and naming conventions from Java, but the two languages are otherwise unrelated and have very different semantics.

History

Birth at Netscape

JavaScript was originally developed by Brendan Eich. While battling with Microsoft over the Web, Netscape considered their client-server offering a distributed OS, running a portable version of Sun Microsystems’ Java. Because Java was a competitor of C++ and aimed at professional programmers, Netscape also wanted a lightweight interpreted language that would complement Java by appealing to nonprofessional programmers, like Microsoft’s Visual Basic (see JavaScript and Java).

Server-side JavaScript

Netscape introduced an implementation of the language for server-side scripting (SSJS) with Netscape Enterprise Server in December 1995, shortly after releasing JavaScript for browsers. Since the mid-2000s, there has been a proliferation of server-side JavaScript implementations. Node.js is one recent notable example of server-side JavaScript being used in real-world applications.

Adoption by Microsoft

JavaScript very quickly gained widespread success as a client-side scripting language for web pages. Microsoft introduced JavaScript support in its own web browser, Internet Explorer, in version 3.0, released in August 1996. Microsoft’s webserver, Internet Information Server, introduced support for server-side scripting in JavaScript with release 3.0 (1996). Microsoft started to promote webpage scripting using the umbrella term Dynamic HTML.

Standardization

In November 1996, Netscape announced that it had submitted JavaScript to Ecma International for consideration as an industry standard, and subsequent work resulted in the standardized version named ECMAScript. In June 1997, Ecma International published the first edition of the ECMA-262 specification. A year later, in June 1998, some modifications were made to adapt it to the ISO/IEC 16262 standard, and the second edition was released. The third edition of ECMA-262 (published in December 1999) is the version most browsers currently use.

Development of what would have been a fourth edition of the ECMAScript standard was ultimately never completed and no fourth edition was released. The fifth edition was released in December 2009. The current edition of the ECMAScript standard is 5.1, released in June 2011.

Later developments

JavaScript has become one of the most popular programming languages on the web. Initially, however, many professional programmers denigrated the language because its target audience consisted of web authors and other such “amateurs”, among other reasons. The advent of Ajax returned JavaScript to the spotlight and brought more professional programming attention. The result was a proliferation of comprehensive frameworks and libraries, improved JavaScript programming practices, and increased usage of JavaScript outside of web browsers, as seen by the proliferation of server-side JavaScript platforms.

Features

Imperative and structured

JavaScript supports much of the structured programming syntax from C.

Dynamic

Dynamic typing

As in most scripting languages, types are associated with values, not with variables. For example, a variable x could be bound to a number, then later rebound to a string.

Object based

JavaScript is almost entirely object-based. JavaScript objects are associative arrays, augmented with prototypes. Object property names are string keys. They support two equivalent syntaxes: dot notation (obj.x = 10) and bracket notation (obj['x'] = 10). Properties and their values can be added, changed, or deleted at run-time. JavaScript has a small number of built-in objects such as Function and Date.

Run-time evaluation

JavaScript includes an eval function that can execute statements provided as strings at run-time.

Functional

First-class functions

Functions are first-class; they are objects themselves. As such, they have properties and methods, such as .call() and .bind().

Prototypes

JavaScript uses prototypes where many other object oriented languages use classes for inheritance.

Functions as object constructors

Functions double as object constructors along with their typical role. Prefixing a function call with new will create an instance of a prototype, inheriting properties and methods from the constructor (including properties from the Object prototype).

Functions as methods

Unlike many object-oriented languages, there is no distinction between a function definition and a method definition. Rather, the distinction occurs during function calling; when a function is called as a method of an object, the function’s local this keyword is bound to that object for that invocation.

JavaScript is a Delegation Language.

Type Composition and Inheritance

Whereas explicit function-based delegation does cover composition in JavaScript, implicit delegation already happens every time the prototype chain is walked in order to, for example, find a method that might be related to, but is not directly owned by, an object. Once the method is found, it is called within this object’s context. Thus inheritance in JavaScript is covered by a delegation automatism that is bound to the prototype property of constructor functions.

Miscellaneous

Run-time environment

JavaScript typically relies on a run-time environment (e.g. a web browser) to provide objects and methods by which scripts can interact with the environment (e.g. a webpage DOM). It also relies on the run-time environment to provide the ability to include/import scripts (e.g. HTML <script> elements). This is not a language feature per se, but it is common in most JavaScript implementations.

Variadic functions

An indefinite number of parameters can be passed to a function. The function can access them through formal parameters and also through the local arguments object. Variadic functions can also be created by using the apply method.
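A short added example of the arguments object and apply (illustrative only):

function sum() {
  var total = 0;
  for (var i = 0; i < arguments.length; i++) {
    total += arguments[i];       // arguments holds every value passed in
  }
  return total;
}
sum(1, 2, 3);                    // 6
sum.apply(null, [4, 5, 6]);      // 15: an array supplied as the argument list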

Array and object literals

As in many scripting languages, arrays and objects (associative arrays in other languages) can each be created with a succinct shortcut syntax. In fact, these literals form the basis of the JSON data format.

Regular expressions

JavaScript also supports regular expressions in a manner similar to Perl, which provide a concise and powerful syntax for text manipulation that is more sophisticated than the built-in string functions.
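For instance (an added sketch using made-up sample text), a regular expression handles matching that would be clumsy with string functions alone:

var text = "Order 66 shipped on 2024-05-01";
var datePattern = /\d{4}-\d{2}-\d{2}/;    // four digits, dash, two digits, dash, two digits
datePattern.test(text);                   // true
text.match(datePattern)[0];               // "2024-05-01"
text.replace(/\d+/g, "#");                // "Order # shipped on #-#-#"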

Example Script:

<meta charset="utf-8">
<title>Minimal Example</title>
<h1 id="header">This is JavaScript</h1>
<script>
document.body.appendChild(document.createTextNode('Hello World!'));
var h1 = document.getElementById('header');    // holds a reference to the <h1> tag
h1 = document.getElementsByTagName('h1')[0];   // accessing the same <h1> element
</script>

C Sharp

C# (pronounced "see sharp") is a multi-paradigm programming language encompassing strong typing, imperative, declarative, functional, procedural, generic, object-oriented (class-based), and component-oriented programming disciplines. It was developed by Microsoft within its .NET initiative and later approved as a standard by Ecma (ECMA-334) and ISO (ISO/IEC 23270:2006). C# is one of the programming languages designed for the Common Language Infrastructure. C# is intended to be a simple, modern, general-purpose, object-oriented programming language.

History

During the development of the .NET Framework, the class libraries were originally written using a managed code compiler system called Simple Managed C (SMC). In January 1999, Anders Hejlsberg formed a team to build a new language, at the time called Cool, which stood for "C-like Object Oriented Language". Microsoft had considered keeping the name "Cool" as the final name of the language, but chose not to do so for trademark reasons. By the time the .NET project was publicly announced at the July 2000 Professional Developers Conference, the language had been renamed C#, and the class libraries and ASP.NET runtime had been ported to C#.

C#'s principal designer and lead architect at Microsoft is Anders Hejlsberg, who was previously involved with the design of Turbo Pascal, Embarcadero Delphi (formerly CodeGear Delphi, Inprise Delphi and Borland Delphi), and Visual J++. In interviews and technical papers he has stated that flaws in most major programming languages (e.g. C++, Java, Delphi, and Smalltalk) drove the fundamentals of the Common Language Runtime (CLR), which, in turn, drove the design of the C# language itself.

James Gosling, who created the Java programming language in 1994, and Bill Joy, a co-founder of Sun Microsystems, the originator of Java, called C# an "imitation" of Java; Gosling further said that "[C# is] sort of Java with reliability, productivity and security deleted". Klaus Kreft and Angelika Langer (authors of a C++ streams book) stated in a blog post that "Java and C# are almost identical programming languages. Boring repetition that lacks innovation", that "hardly anybody will claim that Java or C# are revolutionary programming languages that changed the way we write programs", and that "C# borrowed a lot from Java – and vice versa. Now that C# supports boxing and unboxing, we'll have a very similar feature in Java". In July 2000, Anders Hejlsberg said that C# is "not a Java clone" and is "much closer to C++" in its design.

Since the release of C# 2.0 in November 2005, the C# and Java languages have evolved on increasingly divergent trajectories, becoming somewhat less similar. One of the first major departures came with the addition of generics to both languages, with vastly different implementations.

Version

(The ECMA, ISO/IEC, and Microsoft columns give the dates of the respective language specifications.)

Version | CLR[28] | ECMA | ISO/IEC | Microsoft | Date | .NET Framework | Visual Studio
C# 1.0 | 1.0 | December 2002 | April 2003 | January 2002 | January 2002 | .NET Framework 1.0 | Visual Studio .NET 2002
C# 1.2 | 1.1 | – | – | October 2003 | April 2003 | .NET Framework 1.1 | Visual Studio .NET 2003
C# 2.0 | 2.0 | June 2006 | September 2006 | September 2005[A] | November 2005 | .NET Framework 2.0 | Visual Studio 2005
C# 3.0 | 2.0, 2.0 SP1 | None[B] | None[B] | August 2007 | November 2007 | .NET Framework 2.0 (except LINQ/Query Extensions)[29], 3.0 (except LINQ/Query Extensions)[29], 3.5 | Visual Studio 2008, Visual Studio 2010
C# 4.0 | 4.0[C] | – | – | April 2010 | April 2010 | .NET Framework 4 | Visual Studio 2010
C# 5.0 | 4.5[D] | – | – | June 2013 | August 2012 | .NET Framework 4.5 | Visual Studio 2012

Syntax

Syntax (the form) is contrasted with semantics (the meaning). In processing computer languages, semantic processing generally comes after syntactic processing, but in some cases semantic processing is necessary for complete syntactic analysis, and these are done together or concurrently. In a compiler, syntactic analysis comprises the frontend, while semantic analysis comprises the backend. There are three levels:

  • Words – the lexical level, determining how characters form tokens;

  • Phrases – the grammar level, narrowly speaking, determining how tokens form phrases;

  • Context – determining what objects or variable names refer to, whether types are valid, etc.

Characteristics of C#

  • The C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.

  • The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.

  • The language is intended for use in developing software components suitable for deployment in distributed environments.

  • Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.

  • Support for internationalization is very important.

  • C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.

  • Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

Common type system

C# has a unified type system, called the Common Type System (CTS). A unified type system implies that all types, including primitives such as integers, are subclasses of the System.Object class. For example, every type inherits a ToString() method.

The CTS separates data types into two categories:

  1. Reference types

  2. Value types

Boxing and unboxing

Boxing is the operation of converting a value-type object into a value of a corresponding reference type. Boxing in C# is implicit. Unboxing is the operation of converting a value of a reference type (previously boxed) into a value of a value type.
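A short illustrative C# snippet (added here, not from the original) showing implicit boxing and explicit unboxing:

int i = 123;
object boxed = i;      // boxing: the int value is copied into a reference-type object (implicit)
int j = (int)boxed;    // unboxing: an explicit cast copies the value back into a value type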

Generics

Generics were added to version 2.0 of the C# language. Generics use type parameters, which make it possible to design classes and methods that do not specify the type used until the class or method is instantiated. The main advantage is that one can use generic type parameters to create classes and methods that can be used without incurring the cost of runtime casts or boxing operations.
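As an added sketch of generic type parameters (the method and class names here are invented for illustration):

using System;
using System.Collections.Generic;

class GenericsDemo
{
    // T is a type parameter; the concrete type is chosen at the call site.
    static T FirstOrFallback<T>(List<T> items, T fallback)
    {
        return items.Count > 0 ? items[0] : fallback;
    }

    static void Main()
    {
        List<int> numbers = new List<int>();
        numbers.Add(10);
        numbers.Add(20);
        Console.WriteLine(FirstOrFallback(numbers, -1));                 // 10, with no runtime casts or boxing
        Console.WriteLine(FirstOrFallback(new List<string>(), "none"));  // none
    }
}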

Code:

using System;

class Program
{
    static void Main()
    {
        Console.WriteLine("Hello World!");
    }
}

Output: Hello World!

Python (programming language)

Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C. The language provides constructs intended to enable clear programs on both a small and large scale.

Python supports multiple programming paradigms, including object-oriented, imperative, functional and procedural styles. It features a dynamic type system and automatic memory management and has a large and comprehensive standard library.

History and Versions

Python was conceived in the late 1980s, and its implementation was started in December 1989 by Guido van Rossum at CWI in the Netherlands as a successor to the ABC language (itself inspired by SETL), capable of exception handling and interfacing with the Amoeba operating system. Van Rossum is Python's principal author, and his continuing central role in deciding the direction of Python is reflected in the title given to him by the Python community, Benevolent Dictator For Life (BDFL).

Python 2.0 was released on 16 October 2000, with many major new features including a full garbage collector and support for Unicode. With this release the development process was changed and became more transparent and community-backed.

Python 3.0 (also called Python 3000 or py3k), a major, backwards-incompatible release, was released on 3 December 2008 after a long period of testing. Many of its major features have been backported to the backwards-compatible Python 2.6 and 2.7.

Features

Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming (including by metaprogramming and by magic methods). Many other paradigms are supported using extensions, including design by contract and logic programming.

Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. An important feature of Python is dynamic name resolution (late binding), which binds method and variable names during program execution.

The design of Python offers only limited support for functional programming in the Lisp tradition. The language has map(), reduce() and filter() functions, comprehensions for lists, dictionaries, and sets, as well as generator expressions.
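An added illustrative snippet of these constructs:

from functools import reduce    # reduce() lives in functools on Python 3 (and 2.6+)

nums = [1, 2, 3, 4, 5]
squares = [n * n for n in nums]                    # list comprehension: [1, 4, 9, 16, 25]
evens = list(filter(lambda n: n % 2 == 0, nums))   # filter(): [2, 4]
doubled = list(map(lambda n: n * 2, nums))         # map(): [2, 4, 6, 8, 10]
product = reduce(lambda a, b: a * b, nums)         # reduce(): 120
total = sum(n * n for n in nums)                   # generator expression: 55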

The core philosophy of the language is summarized by the document “PEP 20 (The Zen of Python)”, which includes aphorisms such as:

  • Beautiful is better than ugly.

  • Explicit is better than implicit.

  • Simple is better than complex.

  • Complex is better than complicated.

  • Readability counts.

Syntax and semantics

Python is intended to be a highly readable language. Python has a smaller number of syntactic exceptions and special cases than C or Pascal.

Indentation

Python uses whitespace indentation, rather than curly braces or keywords, to delimit blocks; a feature also termed the off-side rule. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block.
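For example (added for illustration), the block below is delimited purely by indentation:

def classify(n):
    if n < 0:
        return "negative"     # the indented line belongs to the if block
    return "non-negative"     # dedenting ends the block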

Statements and control flow

Python’s statements include (among others):

The if statement, which conditionally executes a block of code, along with else and elif (a contraction of else-if).

The for statement, which iterates over an iterable object, capturing each element to a local variable for use by the attached block.

The while statement, which executes a block of code as long as its condition is true.

The try statement, which allows exceptions raised in its attached code block to be caught and handled by except clauses; it also ensures that clean-up code in a finally block will always be run regardless of how the block exits.

The def statement, which defines a function or method.
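A short added example combining several of these statements (the function and variable names are invented):

def describe(values):
    for v in values:              # for: iterate over an iterable
        if v > 0:                 # if / elif / else: conditional execution
            print("positive")
        elif v == 0:
            print("zero")
        else:
            print("negative")

n = 3
while n > 0:                      # while: loop as long as the condition is true
    n -= 1

try:                              # try / except / finally: exception handling
    describe([1, 0, -2])
except TypeError:
    print("not iterable")
finally:
    print("done")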

Expressions

Python expressions are similar to those of languages such as C and Java:

  • In Python, == compares by value, in contrast to Java, where it compares by reference. (Value comparisons in Java use the equals() method.) Python's is operator may be used to compare object identities (comparison by reference), and comparisons may be chained, for example a <= b <= c (see the example after this list).

  • Python uses the words and, or, not for its boolean operators rather than the symbolic &&, ||, ! used in Java and C.
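An added sketch of value comparison, identity, chaining, and the keyword operators:

a = [1, 2]
b = [1, 2]
a == b                   # True: compares by value
a is b                   # False: different objects (comparison by reference)

x = 5
1 <= x <= 10             # True: chained comparison
x > 0 and not x > 10     # True: boolean keywords instead of && and !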

Methods

Methods on objects are functions attached to the object's class; the syntax instance.method(argument) is, for normal methods and functions, syntactic sugar for Class.method(instance, argument). Python methods have an explicit self parameter to access instance data, in contrast to the implicit self (or this) in some other object-oriented programming languages, for example Java, C++ or Ruby.
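For illustration (an added sketch with invented names), the two call forms are equivalent:

class Greeter:
    def greet(self, name):       # self is the explicit instance parameter
        return "Hello, " + name

g = Greeter()
g.greet("Ada")                   # syntactic sugar for the call below
Greeter.greet(g, "Ada")          # the same call written out explicitly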

Mathematics

Python has the usual C arithmetic operators (+, -, *, /, %). It also has ** for exponentiation, e.g. 5**3 == 125 and 9**.5 == 3.0.

Python prompt:

>>> print "Hello, Python!"

Output: Hello, Python!

Perl

Perl is a family of high-level, general-purpose, interpreted, dynamic programming languages. The languages in this family include Perl 5 and Perl 6.

Though Perl is not officially an acronym, there are various backronyms in use, such as: Practical Extraction and Reporting Language. Perl was originally developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier. Since then, it has undergone many changes and revisions. The latest major stable revision of Perl 5 is 5.18, released in May 2013. Perl 6, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from one another.

The Perl languages borrow features from other programming languages including C, shell scripting, AWK, and sed. They provide powerful text-processing facilities without the arbitrary data-length limits of many contemporary Unix command-line tools, facilitating easy manipulation of text files. Perl 5 gained widespread popularity in the late 1990s as a CGI scripting language, in part due to its parsing abilities.

History

Early versions

Wall began work on Perl in 1987, while working as a programmer at Unisys, and released version 1.0 to the comp.sources.misc newsgroup on December 18, 1987. The language expanded rapidly over the next few years.

Perl 2, released in 1988, featured a better regular expression engine. Perl 3, released in 1989, added support for binary data streams.

Perl 4 went through a series of maintenance releases, culminating in Perl 4.036 in 1993. At that point, Wall abandoned Perl 4 to begin work on Perl 5. Initial design of Perl 5 continued into 1994. The perl5-porters mailing list was established in May 1994 to coordinate work on porting Perl 5 to different platforms. It remains the primary forum for development, maintenance, and porting of Perl 5.

Perl 5.000 was released on October 17, 1994, and development has continued through several versions since.

In late 2012 and 2013, several projects for alternative implementations of Perl 5 were started, including Perl5 in Perl6 by the Rakudo Perl team.

Name

Perl was originally named “Pearl”.

Camel symbol

Programming Perl, published by O’Reilly Media, features a picture of a dromedary camel on the cover and is commonly called the “Camel Book”.

Onion symbol

The Perl Foundation owns an alternative symbol, an onion, which it licenses to its subsidiaries, Perl Mongers, Perl Monks, Perl.org, and others.

Features

The overall structure of Perl derives broadly from C. Perl is procedural in nature, with variables, expressions, assignment statements, brace-delimited blocks, control structures, and subroutines.

Perl also takes features from shell programming. All variables are marked with leading sigils, which unambiguously identify the data type (for example, scalar, array, hash) of the variable in context. Importantly, sigils allow variables to be interpolated directly into strings. Perl has many built-in functions that provide tools often used in shell programming (although many of these tools are implemented by programs external to the shell) such as sorting, and calling on operating system facilities.
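A brief added Perl sketch (variable names invented) of sigils and string interpolation:

my $name  = "World";                  # $ marks a scalar
my @langs = ("Perl", "Python");       # @ marks an array
my %ports = (http => 80, ssh => 22);  # % marks a hash

print "Hello, $name! First language: $langs[0]\n";   # variables interpolate into double-quoted strings
print "ssh runs on port $ports{ssh}\n";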

Perl 5 added features that support complex data structures, first-class functions (that is, closures as values), and an object-oriented programming model. These include references, packages, class-based method dispatch, and lexically scoped variables, along with compiler directives (for example, the strict pragma). A major additional feature introduced with Perl 5 was the ability to package code as reusable modules. Wall later stated that “The whole intent of Perl 5’s module system was to encourage the growth of Perl culture rather than the Perl core.”

All versions of Perl do automatic data-typing and automatic memory management. The interpreter knows the type and storage requirements of every data object in the program; it allocates and frees storage for them as necessary using reference counting (so it cannot deallocate circular data structures without manual intervention). Legal type conversions — for example, conversions from number to string — are done automatically at run time; illegal type conversions are fatal errors.

Applications

Perl has many and varied applications, compounded by the availability of many standard and third-party modules.

Perl has chiefly been used to write CGI scripts. Perl is often used as a glue language, tying together systems and interfaces that were not specifically designed to interoperate, and for “data munging”, that is, converting or processing large amounts of data for tasks such as creating reports. In fact, these strengths are intimately linked.

Graphical user interfaces (GUIs) may be developed using Perl. For example, Perl/Tk and WxPerl are commonly used to enable user interaction with Perl scripts.

Implementation

Perl is implemented as a core interpreter, written in C, together with a large collection of modules, written in Perl and C. The stable version (5.18.1) is 16.53 MB when packaged in a tar file and gzip compressed. The interpreter is 150,000 lines of C code and compiles to a 1 MB executable on typical machine architectures. Alternatively, the interpreter can be compiled to a link library and embedded in other programs. There are nearly 500 modules in the distribution, comprising 200,000 lines of Perl and an additional 350,000 lines of C code. (Much of the C code in the modules consists of character encoding tables.)

The interpreter has an object-oriented architecture. All of the elements of the Perl language (scalars, arrays, hashes, coderefs, file handles) are represented in the interpreter by C structs.

Optimizing

Because Perl is an interpreted language, it can give problems when efficiency is critical; in such situations, the most critical routines can be written in other languages such as C, which can be connected to Perl via simple Inline modules or the more complex but flexible XS mechanism.

Perl on IRC

There are a number of IRC channels that offer support for the language and some modules.

IRC Network | Channels
irc.freenode.net | #perl #perl6 #cbstream #perlcafe #poe
irc.perl.org | #moose #poe #catalyst #dbix-class #perl-help #distzilla #epo #corehackers #sdl #win32
irc.slashnet.org | #perlmonks
irc.oftc.net | #perl
irc.efnet.net | #perlhelp
irc.rizon.net | #perl
irc.debian.org | #debian-perl

Example Code

In older versions of Perl, one would write the Hello World program as:

print "Hello World!\n";

In later versions, which support the say statement, one can also write it as:

use 5.010;

say "Hello World!";

Output: Hello World!

PHP

PHP is a server-side scripting language designed for web development but also used as a general-purpose programming language. PHP is now installed on more than 244 million websites and 2.1 million web servers. Originally created by Rasmus Lerdorf in 1995, the reference implementation of PHP is now produced by The PHP Group. While PHP originally stood for Personal Home Page, it now stands for PHP: Hypertext Preprocessor, a recursive backronym.

PHP code is interpreted by a web server with a PHP processor module, which generates the resulting web page: PHP commands can be embedded directly into an HTML source document rather than calling an external file to process data. It has also evolved to include a command-line interface capability and can be used in standalone graphical applications. PHP is free software released under the PHP License.

History

PHP development began in 1994 when the developer Rasmus Lerdorf wrote a series of Common Gateway Interface (CGI) Perl scripts, which he used to maintain his personal homepage. The tools performed tasks such as displaying his résumé and recording his web traffic.

Zeev Suraski and Andi Gutmans rewrote the parser in 1997 and formed the base of PHP 3, changing the language's name to the recursive acronym PHP: Hypertext Preprocessor. Afterwards, public testing of PHP 3 began, and the official launch came in June 1998. Suraski and Gutmans then started a new rewrite of PHP's core, producing the Zend Engine in 1999. They also founded Zend Technologies in Ramat Gan, Israel. On May 22, 2000, PHP 4, powered by the Zend Engine 1.0, was released. On July 13, 2004, PHP 5 was released, powered by the new Zend Engine II. PHP 5 included new features such as improved support for object-oriented programming.

Version

Version | Release date | Supported until[30]
1.0 | 1995-06-08 | –
2.0 | 1997-11-01 | –
3.0 | 1998-06-06 | 2000-10-20
4.0 | 2000-05-22 | 2001-01-23
4.1 | 2001-12-10 | 2002-03-12
4.2 | 2002-04-22 | 2002-09-06
4.3 | 2002-12-27 | 2005-03-31
4.4 | 2005-07-11 | 2008-08-07
5.0 | 2004-07-13 | 2005-09-05
5.1 | 2005-11-24 | 2006-08-24
5.2 | 2006-11-02 | 2011-01-06
5.3 | 2009-06-30 | 2014-07
5.4 | 2012-03-01 | 3 years after release
5.5 | 2013-06-20 | 3 years after release
5.6 | No date set | No date set
6 | No date set | No date set

Features

PHP is a general-purpose scripting language that is especially suited to server-side web development where PHP generally runs on a web server. Any PHP code in a requested file is executed by the PHP runtime, usually to create dynamic web page content or dynamic images used on websites or elsewhere. It can also be used for command-line scripting and client-side graphical user interface (GUI) applications. PHP can be deployed on most web servers, many operating systems and platforms, and can be used with many relational database management systems (RDBMS). Most web hosting providers support PHP for use by their clients. It is available free of charge, and the PHP Group provides the complete source code for users to build, customize and extend for their own use.

Data types

PHP stores whole numbers in a platform-dependent range, either a 64-bit or 32-bit signed integer equivalent to the C-language long type. Unsigned integers are converted to signed values in certain situations; this behavior is different from other programming languages. Integer variables can be assigned using decimal (positive and negative), octal, hexadecimal, and binary notations.

Floating point numbers are also stored in a platform-specific range. They can be specified using floating point notation, or two forms of scientific notation. Using the Boolean type conversion rules, non-zero values are interpreted as true and zero as false, as in Perl and C++.

Functions

PHP has hundreds of base functions and thousands more available via extensions. These functions are well documented on the PHP site; however, the built-in library has a wide variety of naming conventions and inconsistencies. One cause of the inconsistent function naming is that early versions of PHP internally used string length as a hash function for function names, so using inconsistent names made it easier to get a more uniform distribution of hash values. PHP currently has no functions for thread programming, although it does support multi-process programming. The example below defines getAdder(), which returns an anonymous function (a closure) that captures $x:

function getAdder($x)
{
    return function ($y) use ($x)
    {
        return $x + $y;
    };
}

$adder = getAdder(8);
echo $adder(2); // prints "10"

Objects

Basic object-oriented programming functionality was added in PHP 3 and improved in PHP 4. Object handling was completely rewritten for PHP 5, expanding the feature set and enhancing performance. In previous versions of PHP, objects were handled like value types. The drawback of this method was that the whole object was copied when a variable was assigned or passed as a parameter to a method. In the new approach, objects are referenced by handle, and not by value.

PHP 5 introduced private and protected member variables and methods, along with abstract classes, final classes, abstract methods, and final methods. It also introduced a standard way of declaring constructors and destructors, similar to that of other object-oriented languages such as C++, and a standard exception handling model. Furthermore, PHP 5 added interfaces and allowed for multiple interfaces to be implemented. There are special interfaces that allow objects to interact with the runtime system: objects implementing ArrayAccess can be used with array syntax, and objects implementing Iterator or IteratorAggregate can be used with the foreach language construct, as sketched below.
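As an added sketch (illustrative only; the class name is invented) of one of these special interfaces, a class implementing IteratorAggregate can be traversed directly with foreach:

<?php
class LanguageList implements IteratorAggregate
{
    private $languages = array('Java', 'C#', 'Python', 'Ruby');

    // Returning an Iterator here lets foreach walk the private array.
    public function getIterator()
    {
        return new ArrayIterator($this->languages);
    }
}

$list = new LanguageList();
foreach ($list as $language) {
    echo $language, "\n";
}
?>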

Implementations

The PHP language was originally implemented as an interpreter, and this is still the most popular implementation. Several compilers have been developed which decouple the PHP language from the interpreter. Advantages of compilation include better execution speed, static analysis, and improved interoperability with code written in other languages.

PHP compilers of note include Phalanger, which compiles PHP into Common Intermediate Language (CIL) bytecode, and HipHop, developed at Facebook and now available as open source, which transforms the PHP script into C++ and then compiles it, reducing server load by up to 50%.

PHP source code is compiled on-the-fly to an internal format that can be executed by the PHP engine. In order to speed up execution time and not have to compile the PHP source code every time the web page is accessed, PHP scripts can also be deployed in executable format using a PHP compiler.

Code

<!DOCTYPE html>
<meta charset="utf-8">
<title>PHP Test</title>
<?php
echo 'Hello World';
?>

Output: Hello World

Ruby (programming language)

Ruby is a dynamic, reflective, object-oriented, general-purpose programming language. It was designed and developed in the mid-1990s by Yukihiro “Matz” Matsumoto in Japan.

History

Ruby was conceived on February 24, 1993. At a Google Tech Talk in 2008, Matsumoto further stated, "I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy."

Choice of the name "Ruby"

The name “Ruby” originated during an online chat session between Matsumoto and Keiju Ishitsuka on February 24, 1993, before any code had been written for the language. Initially two names were proposed: “Coral” and “Ruby”. Matsumoto chose the latter in a later e-mail to Ishitsuka. Matsumoto later noted a factor in choosing the name “Ruby” – it was the birthstone of one of his colleagues.

Version

The first public release of Ruby 0.95 was announced on Japanese domestic newsgroups on December 21, 1995.

Ruby reached version 1.0 on December 25, 1996.[14]

Ruby 1.2 was initially released in December 1998.

Ruby 1.4 was initially released in August 1999.

Ruby 1.6 was initially released in September 2000.

Ruby 1.8 was initially released in August 2003, was stable for a long time, and was retired June 2013.[10] Although deprecated, there is still code based on it. Ruby 1.8 is incompatible with Ruby 1.9.

Ruby 1.9 was released in December 2007. Ruby 1.9 introduces many significant changes over the 1.8 series.

Ruby 2.0

Ruby 2.0 added several new features.

Ruby 2.0 is intended to be fully backward compatible with Ruby 1.9.3. As of the official 2.0.0 release on February 24, 2013, there were only five known (minor) incompatibilities.

Ruby 2.1.0 was released on Christmas Day in 2013. The release includes speed-ups, bugfixes, and library updates. Starting with 2.1.0, Ruby is using semantic versioning.

Features

  • Thoroughly object-oriented with inheritance, mixins and metaclasses

  • Dynamic typing and duck typing

  • Everything is an expression (even statements) and everything is executed imperatively (even declarations)

  • Succinct and flexible syntax that minimizes syntactic noise and serves as a foundation for domain-specific languages

  • Dynamic reflection and alteration of objects to facilitate metaprogramming

  • Lexical closures, iterators and generators, with a unique block syntax

  • Literal notation for arrays, hashes, regular expressions and symbols

  • Embedding code in strings (interpolation)

  • Default arguments

  • Four levels of variable scope (global, class, instance, and local) denoted by sigils or the lack thereof

  • Garbage collection

  • Strict boolean coercion rules (everything is true except false and nil)

  • Exception handling

  • Operator overloading

  • Built-in support for rational numbers, complex numbers and arbitrary-precision arithmetic

  • Custom dispatch behavior (through method_missing and const_missing)

  • Native threads and cooperative fibers

  • Initial support for Unicode and multiple character encodings (no ICU support)

  • Interactive Ruby Shell (a REPL)

  • Centralized package management through RubyGems

  • Implemented on all major platforms

  • Large standard library, including modules for YAML, JSON, XML, CGI, OpenSSL, HTTP, FTP, RSS, curses, zlib, and Tk.

Syntax

The syntax of Ruby is broadly similar to that of Perl and Python. Class and method definitions are signaled by keywords. In contrast to Perl, variables are not obligatorily prefixed with a sigil. When used, the sigil changes the semantics of scope of the variable. One difference from C and Perl is that keywords are typically used to define logical code blocks, without braces (i.e., pair of { and }). For practical purposes there is no distinction between expressions and statements. Line breaks are significant and taken as the end of a statement; a semicolon may be equivalently used. Unlike Python, indentation is not significant.

Ruby allows such helper methods to be generated at run time through metaprogramming; to implement the equivalent in many other languages, the programmer would have to write each method (in_black, in_red, in_green, etc.) separately. A sketch of this idea follows.
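Since the original example is not included in this document, the following is only an illustrative Ruby sketch using define_method (the colour codes and method names are assumptions):

# Generate one "in_<colour>" method per colour instead of writing each by hand.
COLOURS = { black: "30", red: "31", green: "32" }

class String
  COLOURS.each do |colour, code|
    define_method("in_#{colour}") do
      "\e[#{code}m#{self}\e[0m"   # wrap the string in an ANSI colour escape
    end
  end
end

puts "Hello".in_red   # prints "Hello" in red on ANSI-capable terminals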

Some other possible uses for Ruby meta-programming include:

  • intercepting and modifying method calls

  • implementing new inheritance models

  • dynamically generating classes from parameters

  • automatic object serialization

  • interactive help and debugging

Implementations

Ruby 1.9 has multiple implementations:

  • The official Ruby interpreter, often referred to as Matz's Ruby Interpreter or MRI. This implementation is written in C and uses its own Ruby-specific virtual machine.

  • JRuby, a Java implementation that runs on the Java virtual machine.

  • Rubinius, a C++ bytecode virtual machine that uses LLVM to compile to machine code at runtime. The bytecode compiler and most core classes are written in pure Ruby.

Other Ruby implementations:

  • MagLev (software), a Smalltalk implementation on VMware’s GemStone VM

  • MacRuby, an OS X implementation on the Objective-C runtime

  • Cardinal, an implementation for the Parrot virtual machine

  • IronRuby, an implementation on the .NET Framework.

Code

puts "Hello World!"

Output: Hello World!