Cyber Attacks

In the ever-growing virtual environment known as cyberspace, individuals can communicate with one another or search for knowledge to broaden their horizons. Traversing cyberspace has become a way of life, one that people of all social classes experience. Unfortunately, some individuals use cyberspace for devious ends, targeting unsuspecting victims for their own enjoyment or for profit.

Such actions, known as cyber-attacks, can deal massive amounts of damage to individuals or, on a larger scale, to companies or government establishments. It does not stop there: when government or military establishments are attacked through cyber methods, it becomes a different kind of attack altogether, known as cyberwarfare or cyberterrorism. On this grand scale, whole sovereign nations can be affected and weakened by something that is not physically tangible.

An Emergency Response Team (ERT) actively monitors and mitigates attacks in real time and identifies trends to educate the security community. Its actionable intelligence helps detect and mitigate the threats that plague an organization's infrastructure:

Understand the weak points of server-based botnets, which represent a new and powerful order in the DDoS environment.

Define how to stop sophisticated attack campaigns with an Advanced Persistent Threat (APT) score, which ranks attacks by severity based on attack duration, number of attack vectors, and attack complexity.

Find out why encrypted-layer attacks are often detected too late.

See how adopting a three-phase (pre-, during-, and post-attack) security approach removes a vulnerable blind spot that attackers exploit to their advantage.
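The APT scoring idea above can be illustrated with a toy ranking function. The weights, normalization caps, and 0-10 scale below are illustrative assumptions, not the ERT's actual formula:

```python
# Toy APT (Advanced Persistent Threat) scoring sketch.
# Weights and caps are illustrative assumptions only.

def apt_score(duration_hours, num_vectors, complexity):
    """Rank an attack by severity on a 0-10 scale.

    duration_hours: how long the campaign has lasted
    num_vectors:    distinct attack vectors observed (e.g. SYN flood, HTTP flood)
    complexity:     analyst rating from 1 (simple) to 5 (highly sophisticated)
    """
    # Normalize each factor to 0..1 (caps chosen arbitrarily for the sketch).
    d = min(duration_hours / 72.0, 1.0)   # 3+ days maxes out the duration score
    v = min(num_vectors / 5.0, 1.0)       # 5+ vectors maxes out the vector score
    c = complexity / 5.0
    return round(10 * (0.4 * d + 0.3 * v + 0.3 * c), 1)

attacks = {
    "single-vector SYN flood": apt_score(2, 1, 1),
    "multi-vector campaign": apt_score(96, 4, 4),
}
```

A long-running, multi-vector, highly complex campaign scores far above a brief single-vector flood, matching the severity ordering the text describes.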

Cyberwarfare utilizes techniques of defending and attacking information and computer networks that inhabit cyberspace. It denies an opponent’s ability to do the same, while employing technological instruments of war to attack an opponent’s critical computer systems.

Paralleling this idea of cyberwarfare, cyberterrorism is “the use of computer network tools to shut down critical national infrastructures (such as energy, transportation, government operations) or to coerce or intimidate a government or civilian population.”

This means the end result of both cyberwarfare and cyberterrorism is the same: to damage critical infrastructures and computer systems linked together within the confines of cyberspace.

Three Basic Factors for Cyber-Attacks

To understand cyberwarfare, we must first understand why cyber-attacks are launched against a state or an individual. Three factors contribute to this reasoning: the fear factor, the spectacular factor, and the vulnerability factor.

Fear Factor

The most common is the fear factor: a cyberterrorist seeks to create fear amongst individuals, groups, or societies. The bombing of a Bali nightclub in 2002 created fear amongst the foreign tourists who frequently visited the venue. Once the bomb went off and casualties ensued, the influx of tourists to Bali was significantly reduced due to fear of death.

Spectacular Factor

The spectacular factor concerns the actual damage of the attack: the attacks create direct losses and gain negative publicity. In 1999, a denial-of-service attack rendered Amazon's website unusable; the company experienced losses because of suspended trading, and the attack was publicized worldwide.

Increasingly, U.S. banking institutions are reluctant to acknowledge – much less discuss – the ongoing distributed-denial-of-service attacks against their online services. Perhaps that’s because they’re concerned that consumers will panic or that revealing too much about the attacks could give hacktivists information they could use to enhance their DDoS abilities.

But in recent regulatory statements, the nation’s largest banks are candid about DDoS attacks and their impact. In their annual 10-K earnings reports, filed with the Securities and Exchange Commission, seven of the nation’s top 10 financial services institutions provide new details about the DDoS attacks they suffered in 2012. In its report, Citigroup even acknowledges that DDoS attacks have led to unspecified losses.

Citigroup, which filed its 10-K report March 1, notes: “In 2012, Citi and other U.S. financial institutions experienced distributed-denial-of-service attacks which were intended to disrupt consumer online banking services. While Citi’s monitoring and protection services were able to detect and respond to these incidents before they became significant, they still resulted in certain limited losses in some instances as well as increases in expenditures to monitor against the threat of similar future cyber-incidents.”

The bank also points out that these attacks are being waged by powerful adversaries. “Citi’s computer systems, software and networks are subject to ongoing cyber-incidents, such as unauthorized access; loss or destruction of data (including confidential client information); account takeovers; unavailability of service; computer viruses or other malicious code; cyber-attacks; and other events,” Citi states. “Additional challenges are posed by external extremist parties, including foreign state actors, in some circumstances as a means to promote political ends.”

When contacted by BankInfoSecurity, Citi and other institutions did not comment further about DDoS attacks or the information in the 10-K reports.

These banks, as well as other U.S. financial institutions, are now in the midst of the third wave of DDoS attacks attributed to the hacktivist group Izz ad-Din al-Qassam Cyber Fighters – a group that has claimed since September that its attacks are being waged to protest a YouTube movie trailer deemed offensive to Muslims.

‘Technically Sophisticated’

In their 10-K reports, Citi, as well as JPMorgan Chase & Co., Bank of America, Goldman Sachs Group, U.S. Bancorp, HSBC North America and Capital One, acknowledge suffering from increased cyber-activity, with some specifically calling out DDoS as an emerging and ongoing threat.

HSBC North America, in its 10-K report filed March 4, notes the global impact of DDoS on its customer base. “During 2012, HSBC was subjected to several ‘denial of service’ attacks on our external facing websites across Latin America, Asia and North America,” the bank states. “One of these attacks affected several geographical regions for a number of hours; there was limited effect from the other attacks with services maintained. We did not experience any loss of data as a result of these attacks.”

And U.S. Bank, in its 10-K filed Jan. 15, describes DDoS attacks as “technically sophisticated and well-resourced.”

“The company and several other financial institutions in the United States have recently experienced attacks from technically sophisticated and well-resourced third parties that were intended to disrupt normal business activities by making internet banking systems inaccessible to customers for extended periods,” U.S. Bank reports. “These ‘denial-of-service’ attacks have not breached the company’s data security systems, but require substantial resources to defend and may affect customer satisfaction and behavior.” U.S. Bank reports no specific losses attributed to DDoS, but it states: “Attack attempts on the company’s computer systems are increasing, and the company continues to develop and enhance its controls and processes to protect against these attempts.”

Other DDoS Comments

Here is what the other institutions reported about DDoS attacks suffered in 2012:

Chase: “The firm and several other U.S. financial institutions continue to experience significant distributed denial-of-service attacks from technically sophisticated and well-resourced third parties which are intended to disrupt consumer online banking services. The firm has also experienced other attempts to breach the security of the firm’s systems and data. These cyber-attacks have not, to date, resulted in any material disruption of the firm’s operations, material harm to the firm’s customers, and have not had a material adverse effect on the firm’s results of operations.”

BofA: “Our websites have been subject to a series of distributed denial of service cybersecurity incidents. Although these incidents have not had a material impact on Bank of America, nor have they resulted in unauthorized access to our or our customers’ confidential, proprietary or other information, because of our prominence, we believe that such incidents may continue. Although to date we have not experienced any material losses relating to cyber-attacks or other information security breaches, there can be no assurance that we will not suffer such losses in the future.”

CapOne: “Capital One and other U.S. financial services providers were targeted recently on several occasions with distributed denial-of-service attacks from sophisticated third parties. On at least one occasion, these attacks successfully disrupted consumer online banking services for a period of time. If these attacks are successful, or if customers are unable to access their accounts online for other reasons, it could adversely impact our ability to service customer accounts or loans, complete financial transactions for our customers or otherwise operate any of our businesses or services online. In addition, a breach or attack affecting one of our third-party service providers or partners could impact us through no fault of our own. Because the methods and techniques employed by perpetrators of fraud and others to attack, disable, degrade or sabotage platforms, systems and applications change frequently and often are not fully recognized or understood until after they have been launched, we and our third-party service providers and partners may be unable to anticipate certain attack methods in order to implement effective preventative measures. Should a cyber-attack against us succeed on any material scale, market perception of the effectiveness of our security measures could be harmed, and we could face the aforementioned risks. Though we have insurance against some cyber-risks and attacks, it may not be sufficient to offset the impact of a material loss event.”

No Mentions of Attacks

Among the top 10, the only institutions that do not specifically reference DDoS in their 10-K reports are Morgan Stanley, Bank of NY Mellon and Wells Fargo, a bank that has recently suffered significant online outages. Wells Fargo spokeswoman Sara Hawkins tells BankInfoSecurity that the bank’s online and mobile-banking channels were inaccessible for portions of the day on April 4, when it saw “an unusually high volume of website and mobile traffic … which we believe is a denial of service attack.”

Reporting Protocol

Doug Johnson, who oversees risk management policy for the American Bankers Association, says banking institutions are required to report all suspicious cyber-activity, either through their filings with the SEC or in Suspicious Activity Reports to the Financial Crimes Enforcement Network, a bureau of the U.S. Department of the Treasury. All financial institutions, regardless of size, must report SARs to FinCEN, an agency that collects, analyzes and shares financial intelligence. However, only companies with more than $10 million in assets are required to file reports with the SEC. Banking institutions are required to report cyber-attacks in their SEC filings, Johnson says.

“Online banking platforms, obviously, are extremely important to banking retail consumers, and so that would be one of those systems which would be very important to report on a suspicious activity report,” Johnson says. “One thing that is also very important to do is to go and have that conversation with your primary federal regulator, at the field level, to find out what you would do, as an institution, for generalized security breach reporting.” Breach reporting requirements vary from state to state, Johnson adds.

Vulnerability Factor

The vulnerability factor exploits how vulnerable an organization or government establishment is to cyber-attacks. An organization can be vulnerable to a denial-of-service attack, or a government establishment's web pages can be defaced. A computer network attack disrupts the integrity or authenticity of data, usually through malicious code that alters the program logic controlling data, leading to errors in output.

Professional Hackers to Cyberterrorists

Professional hackers, either working on their own or employed by a government or military service, can find computer systems with vulnerabilities lacking the appropriate security software. Once found, they can infect systems with malicious code and then remotely control the system or computer by sending commands to view content or to disrupt other computers. There needs to be a pre-existing system flaw within the computer, such as no antivirus protection or a faulty system configuration, for the viral code to work. Many professional hackers will promote themselves to cyberterrorists, where a new set of rules governs their actions. Cyberterrorists have premeditated plans, and their attacks are not born of rage. They develop their plans step by step and acquire the appropriate software to carry out an attack. They usually have political agendas, targeting political structures. Cyberterrorists are hackers with a political motivation; their attacks can impact political structures through corruption and destruction.

They also target civilians, civilian interests and civilian installations. As previously stated cyberterrorists attack persons or property and cause enough harm to generate fear.

Syntactic Attacks and Semantic Attacks

In detail, there are a number of techniques to utilize in cyber-attacks and a variety of ways to administer them to individuals or establishments on a broader scale. Attacks are broken down into two categories: syntactic attacks and semantic attacks. Syntactic attacks are straightforward: they use malicious software, which includes viruses, worms, and Trojan horses.


A virus is a self-replicating program that can attach itself to another program or file in order to reproduce. A virus can hide in unlikely locations in the memory of a computer system and attach itself to whatever file it sees fit to execute its code. It can also change its digital footprint each time it replicates, making it harder to track down in the computer.


A worm does not need another file or program to copy itself; it is a self-sustaining running program. Worms replicate over a network using protocols. The latest incarnations of worms make use of known vulnerabilities in systems to penetrate, execute their code, and replicate to other systems, such as the Code Red II worm that infected more than 259,000 systems in less than 14 hours. On a much larger scale, worms can be designed for industrial espionage, monitoring and collecting server and traffic activity and then transmitting it back to their creator.

On July 12, 2001, a new worm began propagating across the internet. Although the worm did not yet have a name, it was the first incarnation of what was to become known as the “Code Red” worm. This initial version of the worm is commonly referred to as CRv1. On July 19, another variant of the worm, which shared nearly all its code with the first version, began to spread even more rapidly than its predecessor had a week before. The new variant of the Code Red worm was reported to have infected more than 250,000 systems in just nine hours. This variant of the worm is now commonly referred to as CRv2.

The worm scanned the internet, identified vulnerable systems and infected these systems by installing itself. The rate of scanning grew rapidly because each newly installed worm joined others already in existence. Not only did the worm result in defaced web pages on the systems it infected, but its uncontrolled growth in scanning resulted in a decrease of speed across the internet—a denial of service attack—and led to widespread outages among all types of systems, not just the Microsoft Internet Information Server (IIS) systems it infected directly. On August 4, a new worm exploited the same vulnerability in the Microsoft IIS web server as the original Code Red worm. Even though it shared almost no code with the first two versions of the original worm, it was named Code Red II simply because it contained that name in its source code and exploited the same vulnerability in the IIS indexing service. In addition to the original Code Red and the Code Red II worms, there are other possible variants of the worm.
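The growth pattern described above, where each newly infected host joins the scanning, follows a classic epidemic curve. A minimal random-scanning worm simulation illustrates it (the address-space size, vulnerable population, and scan rate below are illustrative assumptions, not Code Red's actual parameters):

```python
import random

def simulate_worm(address_space, vulnerable, scans_per_host_per_tick, ticks):
    """Simulate a random-scanning worm.

    Each infected host probes random addresses each tick; a probe that hits
    a not-yet-infected vulnerable host infects it. Returns the number of
    infected hosts after each tick.
    """
    random.seed(42)  # fixed seed so the run is repeatable
    susceptible = set(random.sample(range(address_space), vulnerable))
    infected = {susceptible.pop()}  # one initial infection
    history = []
    for _ in range(ticks):
        newly = set()
        for _ in range(len(infected) * scans_per_host_per_tick):
            target = random.randrange(address_space)
            if target in susceptible:
                newly.add(target)
        susceptible -= newly
        infected |= newly
        history.append(len(infected))
    return history

curve = simulate_worm(address_space=100_000, vulnerable=2_000,
                      scans_per_host_per_tick=50, ticks=30)
```

Plotting the returned counts shows the slow start, explosive middle phase, and saturation once most vulnerable hosts are infected—the same shape observed in Code Red's spread.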

Code Red’s Effect on Both Private Industry and the Government

As a result of the Code Red worm’s rapid spread across the internet, businesses and individuals worldwide experienced disruptions of their internet service. Qwest, the Denver-based telecommunications corporation, which provides DSL services to approximately 360,000 customers throughout the western and midwestern U.S., is being asked to refund fees to customers as a result of service interruptions due to the denial of service caused by the Code Red worm. In addition, the Washington state Attorney General has asked Qwest to pay these customers, some of whom claim the outage cost them thousands of dollars in lost sales. However, Qwest says it has no plans at this time to credit customers who were afflicted by the Code Red worm.

Will Code Red be the largest and fastest worm to infect the internet? The answer is no. Code Red is just a shot across the bow. The potential exists for even greater damage from worms that will spread faster and do far more damage than Code Red did.

Previously released worms have required at least several hours to spread and become known, giving system and network administrators sufficient time to recognize the potential threat and take measures to mitigate the damage. Imagine a worm that could attack—not just in a matter of hours—but in a matter of minutes, as Nicholas C. Weaver from the University of California at Berkeley Computer Science Department suggests in his scenario and analysis entitled “Warhol Worms,” based on Andy Warhol’s statement that everyone will have 15 minutes of fame.

Trojan Horses

A Trojan horse is designed to perform legitimate tasks while also performing unknown and unwanted activity. It can be the basis of many viruses and worms, installing onto the computer as a keyboard logger or backdoor software. In a commercial sense, Trojans can be embedded in trial versions of software and can gather additional intelligence about the target without the person even knowing it is happening. All three of these are likely to attack an individual or establishment through emails, web browsers, chat clients, remote software, and updates.

A semantic attack is the modification and dissemination of correct and incorrect information. Information can be modified without the use of computers, even though computers create new opportunities for doing so. The dissemination of incorrect information can be used to send someone in the wrong direction or to cover one's tracks.

There were two such instances of cyberspace conflict, between India and Pakistan and between Israel and Palestine. India and Pakistan were engaged in a long-term dispute over Kashmir which moved into cyberspace. Pro-Pakistan hackers repeatedly attacked computers in India; the number of attacks grew yearly: 45 in 1999, 133 in 2000, and 275 by the end of August 2001. In the Israel-Palestine conflict, cyber-attacks were conducted in October 2000 when Israeli teenagers launched denial-of-service attacks on computers owned by the Palestinian terrorist organizations Hezbollah and Hamas. Anti-Israel hackers responded by crashing several Israeli web sites by flooding them with bogus traffic.

DDoS: The Biggest Cyber-Attack in History

Hundreds of thousands of Britons are unsuspecting participants in one of the internet’s biggest cyber-attacks ever – because their broadband router has been subverted.

Spamhaus, which operates a filtering service used to weed out spam emails, has been under attack since 18 March after adding a Dutch hosting organisation called Cyberbunker to its list of unwelcome internet sites. The service has “made plenty of enemies”, said one expert, and the cyber-attack appeared to be retaliation.

A collateral effect of the attack is that internet users accustomed to high-speed connections may have seen those slow down, said James Blessing, a member of the UK Internet Service Providers’ Association (ISPA) council.

“It varies depending on where you are and what site you’re trying to get to,” he said. “Those who are used to it being really quick will notice.” Some people accessing the online streaming site Netflix reported a slowdown.

Spamhaus offers a checking service for companies and organisations, listing internet addresses it thinks generate spam, or which host content linked to spam, such as sites selling pills touted in junk email. Use of the service is optional, but thousands of organisations use it millions of times a day in deciding whether to accept incoming email from the internet.

Cyberbunker offers hosting for any sort of content as long, it says, as it is not child pornography or linked to terrorism. But in mid-March Spamhaus added its internet addresses to its blacklist.

In retaliation, the hosting company and a number of eastern European gangs apparently enlisted hackers who have in turn put together huge “botnets” of computers, and also exploited home and business broadband routers, to try to knock out the Spamhaus system.

“Spamhaus has made plenty of enemies over the years. Spammers aren’t always the most lovable of individuals, and Spamhaus has been threatened, sued and [attacked] regularly,” noted Matthew Prince of Cloudflare, a hosting company that helped the London business survive the attack by diverting the traffic.

Rather than aiming floods of traffic directly at Spamhaus’s servers – a familiar tactic that is easily averted – the hackers exploited the internet’s domain name system (DNS) servers, which accept a human-readable address for a website and return the machine-readable numeric address that computers use. The hackers “spoofed” requests for lookups to the DNS servers so they seemed to come from Spamhaus; the servers responded with huge floods of responses, all aimed back at Spamhaus.
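The leverage in such a reflection attack comes from the size difference between query and response: a small spoofed query elicits a much larger answer aimed at the victim. A back-of-the-envelope sketch (the packet sizes and counts are illustrative assumptions, not measurements from the Spamhaus incident):

```python
def amplification(query_bytes, response_bytes, queries_per_sec, reflectors):
    """Estimate attacker-side vs. victim-side traffic in a DNS reflection attack.

    Returns bytes/sec sent by the attacker, bytes/sec delivered to the
    victim, and the amplification factor (response size / query size).
    """
    sent = query_bytes * queries_per_sec * reflectors          # attacker side
    delivered = response_bytes * queries_per_sec * reflectors  # victim side
    return sent, delivered, response_bytes / query_bytes

# A small DNS query (~64 bytes) for a large record can elicit a ~3,000-byte
# response, so every byte the attacker sends becomes roughly 47 at the victim.
sent, delivered, factor = amplification(
    query_bytes=64, response_bytes=3000,
    queries_per_sec=100, reflectors=30_000)
```

This is why Dan Kaminsky's point below matters: the open resolvers doing the amplifying are behaving exactly as designed, so the multiplication cannot simply be switched off.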

Some of those requests will have been coming from UK users without their knowledge, said Blessing. “If somebody has a badly configured broadband modem or router, anybody in the outside world can use it to redirect traffic and attack the target – in this case, Spamhaus.”

Many routers in the UK provided by ISPs have settings enabled which let them be controlled remotely for servicing. That, together with so-called “open DNS” systems online, which are known to be insecure, helped the hackers to create a flood of traffic.

“British modems are certainly being used for this,” said Blessing, who said that the London Internet Exchange — which routes traffic in and out of the UK — had been helping to block nuisance traffic aimed at Spamhaus.

The use of the DNS attacks has experts worried. “The No 1 rule of the internet is that it has to work,” Dan Kaminsky, a security researcher who pointed out the inherent vulnerabilities of the DNS years ago, told AP.

“You can’t stop a DNS flood by shutting down those [DNS] servers because those machines have to be open and public by default. The only way to deal with this problem is to find the people doing it and arrest them.”

East vs West: China and United States

Within cyberwarfare, the individual must recognize the state actors involved in committing these cyber-attacks against one another. The two predominant players that will be discussed form the age-old comparison of East versus West: China's cyber capabilities compared to those of the United States. There are many other state and non-state actors involved in cyberwarfare, such as Russia, Iran, Iraq, and Al Qaeda, but since China and the U.S. lead the field in cyberwarfare capabilities, they will be the only two state actors discussed.


China

China’s People’s Liberation Army (PLA) has developed a strategy called “Integrated Network Electronic Warfare” which guides computer network operations and the use of cyberwarfare tools. This strategy helps link together network warfare tools and electronic warfare weapons against an opponent’s information systems during conflict. The PLA believes the fundamental requirement for success is seizing control of an opponent’s information flow and establishing information dominance. The Science of Military Strategy and The Science of Campaigns both identify enemy logistics systems networks as the highest priority for cyber-attacks and state that cyberwarfare must mark the start of a campaign; used properly, it can enable overall operational success. By focusing on attacking the opponent’s infrastructure to disrupt the transmission and processing of the information that dictates decision-making operations, the PLA would secure cyber dominance over its adversary.

The predominant techniques that would be utilized during a conflict to gain the upper hand are as follows: the PLA would strike with electronic jammers and electronic deception and suppression techniques to interrupt the transfer of information, and would launch virus attacks or hacking techniques to sabotage information processes, all in the hopes of destroying enemy information platforms and facilities. The PLA’s Science of Campaigns notes that one role for cyberwarfare is to create windows of opportunity for other forces to operate without detection or with a lowered risk of counterattack by exploiting the enemy’s periods of “blindness,” “deafness” or “paralysis” created by cyber-attacks. That is one of the main focal points of cyberwarfare: to weaken your enemy to the fullest extent possible so that your physical offensive has a higher chance of success.

The PLA conducts regular training exercises in a variety of environments, emphasizing both the use of cyberwarfare tactics and techniques and the countering of such tactics if they are employed against it. Faculty research has focused on designs for rootkit usage and detection for the Kylin Operating System, which helps further train these individuals in cyberwarfare techniques. China perceives cyberwarfare as a deterrent comparable to nuclear weapons, possessing the ability for greater precision, leaving fewer casualties, and allowing for long-ranged attacks.

United States

In the West, the United States projects a different “tone of voice” when cyberwarfare is on the tip of everyone’s tongue. The United States provides security plans strictly in response to cyberwarfare, essentially going on the defensive when it is attacked by devious cyber methods. In the U.S., the responsibility for cybersecurity is divided between the Department of Homeland Security, the Federal Bureau of Investigation, and the Department of Defense. In recent years, a new department was created to specifically tend to cyber threats: Cyber Command. Cyber Command is a military subcommand under U.S. Strategic Command and is responsible for dealing with threats to the military cyber infrastructure. Its service elements include Army Forces Cyber Command, the Twenty-fourth Air Force, Fleet Cyber Command and Marine Forces Cyber Command. It ensures that the President can navigate and control information systems and that he also has military options available when defense of the nation needs to be enacted in cyberspace. Individuals at Cyber Command must pay attention to state and non-state actors who are developing cyberwarfare capabilities for conducting cyber espionage and other cyber-attacks against the nation and its allies. Cyber Command seeks to be a deterrent, dissuading potential adversaries from attacking the U.S., while being a multi-faceted department that conducts cyber operations of its own.

Three prominent events took place which may have been catalysts in the creation of the idea of Cyber Command. There was a failure of critical infrastructure reported by the CIA, where malicious activities against information technology systems disrupted electrical power capabilities overseas; this resulted in multi-city power outages across multiple regions. The second event was the exploitation of global financial services: in November 2008, an international bank had a compromised payment processor that allowed fraudulent transactions to be made at more than 130 automated teller machines in 49 cities within a 30-minute period. The last event was the systemic loss of U.S. economic value, when in 2008 industry estimated losses of intellectual property to data theft at $1 trillion. Even though all these events were internal catastrophes, they were very real in nature, meaning nothing can stop state or non-state actors from doing the same thing on an even grander scale. Other initiatives, like the Cyber Training Advisory Council, were created to improve the quality, efficiency, and sufficiency of training for computer network defense, attack, and exploitation of enemy cyber operations.

On both ends of the spectrum, Eastern and Western nations show a “sword and shield” contrast in ideals. The Chinese have a more offense-minded idea of cyberwarfare, trying to land a pre-emptive strike in the early stages of conflict to gain the upper hand. The U.S. takes more reactionary measures, aimed at creating systems with impenetrable barriers to protect the nation and its civilians from cyber-attacks.

Infrastructures as Targets

Once a cyber-attack has been initiated, certain targets need to be attacked to cripple the opponent. Certain infrastructures have been highlighted as critical in times of conflict, and attacks on them can severely cripple a nation. Control systems, energy resources, finance, telecommunications, transportation, and water facilities are seen as critical infrastructure targets during conflict. A report on industrial cybersecurity problems, produced by the British Columbia Institute of Technology and the PA Consulting Group using data from as far back as 1981, reportedly found a 10-fold increase since 2000 in the number of successful cyber-attacks on infrastructure Supervisory Control and Data Acquisition (SCADA) systems. This is just one example showing how easy it is to attack a selected control-systems infrastructure; other infrastructures could be subject to countless cyber-attacks if the vulnerability and opportunity presented themselves.

Control Systems

Control systems are responsible for activating and monitoring industrial or mechanical controls. Many devices are integrated with computer platforms to control valves and gates to certain physical infrastructures. Control systems are usually designed as remote telemetry devices that link to other physical devices through internet access or modems. Little security can be offered when dealing with these devices, enabling many hackers or cyberterrorists to seek out systematic vulnerabilities. Paul Blomgren, manager of sales engineering at a cybersecurity firm, explained how his people drove to a remote substation, saw a wireless network antenna and immediately plugged in their wireless LAN cards. They took out their laptops and connected to the system because it wasn’t using passwords. “Within 10 minutes, they had mapped every piece of equipment in the facility,” Blomgren said. “Within 15 minutes, they mapped every piece of equipment in the operational control network. Within 20 minutes, they were talking to the business network and had pulled off several business reports. They never even left the vehicle.” This was done by ordinary employees of that company; given that there were no passwords, if a cyberterrorist had broken in and gained all that information, it would have been catastrophic.


Energy

Energy is seen as the second infrastructure that could be attacked. It is broken down into two categories: electricity and natural gas. Electricity, distributed through electric grids, powers cities, regions, and households; it powers machines and other mechanisms used in day-to-day life. Using the U.S. as an example, in a conflict cyberterrorists can access data through the Daily Report of System Status, which shows power flows throughout the system, and can pinpoint the busiest sections of the grid. By shutting those grids down, they can cause mass hysteria, backlog, and confusion, and by locating critical areas of operation they can stage further attacks in a more direct method. Cyberterrorists can also access instructions on how to connect to the Bonneville Power Administration, which helps direct them on how not to fault the system in the process. This is a major advantage that can be exploited when cyber-attacks are being made, because foreign attackers with no prior knowledge of the system can attack with the highest accuracy without drawbacks. Cyber-attacks on natural gas installations go much the same way as attacks on electrical grids. Cyberterrorists can shut down these installations, stopping the flow, or they can even reroute gas flows to a section occupied by one of their allies. There was a case in Russia in which the gas supplier Gazprom lost control of its central switchboard, which routes gas flow, after an inside operator and a Trojan horse program bypassed security.[13]


Financial infrastructure could be hit hard by cyber-attacks. Money is constantly being exchanged in these institutions; if cyberterrorists attacked, rerouted transactions, and stole large sums, the financial industry would collapse and civilians would be left without jobs or security. Operations would stall from region to region, causing nationwide economic degradation. In the U.S. alone, the average daily volume of transactions has hit $3 trillion, 99% of which is non-cash.[14] Disrupting that amount of money for even one day, or for a period of days, can cause lasting damage, making investors pull out of funding and eroding public confidence.


Cyber-attacks on telecommunication infrastructure have straightforward results. Telecommunication integration is becoming common practice; systems such as voice and IP networks are merging, and everything is being run over the internet because the speed and storage capacity seem limitless. Denial-of-service attacks can be administered as previously mentioned, but more complex attacks can target BGP routing protocols or DNS infrastructure. An attack is less likely to target or compromise the traditional telephony network of SS7 switches, or physical devices such as microwave stations or satellite facilities, although the ability to shut down those physical facilities and disrupt telephony networks would still exist. The whole idea behind these cyber-attacks is to cut people off from one another, to disrupt communication, and by doing so to impede the sending and receiving of critical information. In cyberwarfare, this is a critical way of gaining the upper hand in a conflict: by controlling the flow of information and communication, a nation can plan more accurate strikes and enact better counter-attack measures against its enemies.


Transportation infrastructure mirrors telecommunication facilities: by impeding transportation for individuals in a city or region, the economy degrades slightly over time. Successful cyber-attacks can impact scheduling and accessibility, disrupting the economic chain. Carrying methods are affected, making it hard for cargo to be sent from one place to another. In January 2003, during the “Slammer” virus outbreak, Continental Airlines was forced to shut down flights due to computer problems. Cyberterrorists can target railroads by disrupting switches, target flight software to impede airplanes, and target road usage to impede more conventional transportation methods.


Water could be one of the most critical infrastructures to be attacked. It is seen as one of the greatest security hazards among all computer-controlled systems, since massive amounts of water could be unleashed into an unprotected area, causing loss of life and property damage. It is not only water supplies that could be attacked; sewer systems can be compromised too. No exact figure was given for the cost of such damage, but the estimated cost of replacing critical water systems runs into the hundreds of billions of dollars. Most of these water infrastructures are well developed, making it hard for cyber-attacks to cause significant damage; at most, equipment failure can occur, disrupting power outlets for a short time.

Preparing for the (Inevitable?) DDoS Attack

The current landscape of means, motives, and opportunities to execute distributed denial of service (DDoS) attacks makes any organization a more likely target than you might imagine.

Open-source attack tools are easy to find. Acquiring the capacity to execute a DDoS attack is almost a trivial concern for state-sponsored actors or criminals, who can lease an attack botnet or build their own through a malware distribution campaign. And it isn’t hard to recruit volunteers to coordinate attacks designed to protest pending legislation or social injustice.

Every organization today is a potential target, and it is essential for both technology and business leaders to consider how they would deal with a DDoS attack. Here are four key steps.

Assess risks

You may conclude that a nation-state could not benefit from disrupting your online operations, but could a crime gang decide that your company is a candidate for extortion? Do your products or services make you a potential target for hate or protest groups? Has your business attracted sufficient recent attention to make you a target for groups seeking notoriety?

If the answer is yes, you will need to assess the likelihood of your organization becoming a DDoS target. Guides like this Forrester whitepaper (commissioned by VeriSign) can help you calculate the probability of an attack, the losses from a service disruption, and the costs of shoring up your defenses and responding to an attack.
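As a rough sketch of the kind of calculation such a guide walks through, the expected annual cost of DDoS exposure can be modeled as the attack probability times the disruption loss, plus the cost of defenses. The function and all figures below are hypothetical, purely for illustration:

```python
def annualized_loss_expectancy(attack_probability, outage_hours, revenue_per_hour,
                               mitigation_cost):
    """Rough expected-cost model: probability-weighted disruption loss
    plus the fixed annual cost of shoring up defenses."""
    single_loss = outage_hours * revenue_per_hour
    return attack_probability * single_loss + mitigation_cost

# Example: a 20% chance of an attack causing an 8-hour outage at $50k/hour,
# plus $40k/year spent on defenses.
cost = annualized_loss_expectancy(0.20, 8, 50_000, 40_000)
print(cost)  # 120000.0
```

Comparing this number against the quoted price of a mitigation service is one simple way to frame the build-versus-buy decision.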

Develop an action plan

Once you have assessed the risks, consider gathering IT and other personnel to plan how to monitor, respond to, recover from, and make public disclosures about a DDoS attack. If you engage in social media, include parties responsible for your messaging. They are best positioned to monitor public opinion. They can help you distinguish between a DDoS attack and a traffic spike resulting from successful or opportune messaging. They are also the logical candidates to prepare (with legal counsel) and deliver any statements you may issue after an attack.

As you formulate an action plan, consider the forms of monitoring that will help you adopt an early response. Consider, too, how you will restore service or data. Determine what aspects of the plan you will take on yourself and what aspects will require aid from external parties.
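One input to such monitoring is a way to tell a sudden surge apart from baseline traffic. The sketch below is a deliberately crude moving-average detector, not a production DDoS detector; the window size and threshold are arbitrary illustrative choices:

```python
from collections import deque

class TrafficMonitor:
    """Flags request-rate samples that exceed a multiple of the recent
    moving average -- a crude first-pass signal only."""
    def __init__(self, window=5, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, requests_per_sec):
        baseline = sum(self.samples) / len(self.samples) if self.samples else None
        self.samples.append(requests_per_sec)
        if baseline is None:
            return False          # no baseline yet, nothing to flag
        return requests_per_sec > self.threshold * baseline

mon = TrafficMonitor()
for rate in [100, 110, 95, 105]:
    mon.observe(rate)             # normal traffic builds the baseline
print(mon.observe(2000))          # sudden ~20x surge -> True
```

A real deployment would combine a signal like this with the human context mentioned above, since a marketing success can look exactly like an attack at the traffic layer.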

The IETF paper RFC 4732 is a good place to start familiarizing yourself with DDoS techniques and mitigation approaches. SSAC advisories and global DDoS threat discussion threads are also valuable resources.

Reports from sources like Squidoo and Arbor Networks describe practical techniques such as rate limiting (e.g., source, connection), address space or traffic type filtering, and Anycast routing.
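Source rate limiting, one of the techniques mentioned above, is often implemented as a token bucket. Here is a minimal sketch; the rate and capacity values are illustrative, not recommendations:

```python
class TokenBucket:
    """Per-source rate limiting sketch: a bucket refills at `rate` tokens/sec
    up to `capacity`; requests that find no token are dropped."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=4)   # 2 req/s sustained, bursts of 4
results = [bucket.allow(now=0.0) for _ in range(6)]
print(results)  # [True, True, True, True, False, False]
```

In practice a mitigation device keeps one bucket per source address or per connection, which is where the "source, connection" variants mentioned above come from.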

Talk to outside experts

If you’ve identified external parties to assist in your response and mitigation efforts, let them know in advance what you’d like them to do if you come under attack. Discuss what intelligence you will share, what you would like them to share, how you will exchange or transfer sensitive information, and what their assistance will cost. Discuss how and when you would contact law enforcement agencies, the press, or customers and what your disclosure process will entail. Exchange emergency and business contact information.

Hope for the best, plan for the worst

DDoS attacks have achieved sustained rates of 65 Gbit/s, so even the best preparations may not prevent a disruption of service. But preparing your defense strategy in advance will shorten your response time during the attack. After the attack, these preparations will help you collect information for a postmortem, so your team (and external parties) can learn from the event and adjust your response.

DDoS mitigation is challenging. There’s no shame in outsourcing. If you invest the time to research DDoS defense only to conclude it’s more than your organization can handle, the time you’ve invested is still worthwhile. It will help you make an informed, calm choice from among the available DDoS mitigation services.

An alternative to an on-premise approach to DDoS protection is an in-the-cloud service. An ISP, network operator, or third-party provider with large enough capacity can provide such a service.

Essentially, an in-the-cloud DDoS protection service means that packets destined for an organization (in this case, the end customer of the service) are first sent through an Internet scrubbing center, where bad traffic like DDoS packets is dropped and the cleansed traffic is then delivered.

Large attacks are rare events, but dealing with them requires specialized skills, technology, and bandwidth, and there is no competitive advantage in maintaining those capabilities in-house if they are available from a service provider. An in-the-cloud DDoS mitigation service admittedly needs a substantial infrastructure, with adequate bandwidth and capacity to deal with traffic from multiple customers. But once the infrastructure is built, the service provider can share the skills and capacity across many clients, without clients having to build out their on-premise capacity. There are several advantages to performing DDoS mitigation in the cloud:

• The service provider has a broad view of the Internet traffic across multiple clients and networks that it can learn from and apply mitigation to. For example, by looking across multiple clients’ traffic, the service can quickly recognize malicious sources that participate in DDoS activities. As a result, this type of DDoS detection is much more effective and timely than anything an end-user organization can do standalone.

• By virtue of sharing the service, the costs should be lower and the service better than a go-it-alone effort.

• The end-user organization need not invest any on-premise resources, capital or operational, to deal with traffic that is not wanted in the first place. The service requires only an ongoing service expense.

• The scrubbing center would typically have core Internet connectivity and therefore has a large capacity to deal with traffic, much larger than a typical enterprise network. This means that it can deal with attacks larger than any single user organization can handle.

• By virtue of being a service, the provider can easily be swapped out for another if the client’s needs change.

These attributes of an in-the-cloud DDoS service are a great example of the industry buzz around the concept of cloud computing or cloud services. DDoS mitigation in the cloud is a virtual extension of one’s enterprise infrastructure, which handles a particular networking and security function.
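The cross-client visibility advantage described above can be sketched in a few lines: a provider that sees attack sources across several customers can promote repeat offenders to a shared blocklist. The data, addresses (from the TEST-NET documentation ranges), and threshold below are hypothetical:

```python
from collections import defaultdict

def shared_blocklist(per_client_attack_logs, min_clients=2):
    """Provider-side view: sources observed attacking several distinct
    clients are promoted to a blocklist shared by all clients."""
    seen_by = defaultdict(set)
    for client, sources in per_client_attack_logs.items():
        for src in sources:
            seen_by[src].add(client)
    return {src for src, clients in seen_by.items() if len(clients) >= min_clients}

logs = {
    "client_a": ["203.0.113.7", "198.51.100.4"],
    "client_b": ["203.0.113.7", "192.0.2.99"],
    "client_c": ["203.0.113.7"],
}
print(shared_blocklist(logs))  # {'203.0.113.7'}
```

No single client sees enough traffic to make this judgment alone, which is precisely the detection advantage the shared service has.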


The Death of Beta Testing


Short release cycles, continuous deployment, automatic updates, and a fear of social media tattling on beta defects are causing vendors to forgo beta testing and find new ways to get useful user feedback.

Many QA engineers have had a love/hate relationship with beta testing. “It’s expensive to administer and doesn’t give me useful information,” said one.

“Too many bugs slip through this so-called ‘beta testing,’ so I have to test the whole app in the lab anyway,” said another, “but if Marketing wants it, I can’t stop ’em.”

On the other hand, some organizations continue to see value in beta testing:

“We need to plan for three weeks of beta phase to make sure we get better coverage on use cases and environments that we can’t test against ourselves.”

Used judiciously, beta testing programs can be valuable, but modern software development practices challenge the whole notion of beta testing.

How do you fit beta testing cycles into already compressed release cycles, or into a cadence of frequent releases?

And as user reviews and ratings of applications become more transparent through social and app store review channels, the definition of “app quality” is slowly morphing from functional correctness to user-perceived value.

Doesn’t that change the entire premise of a beta testing program? There are many problems associated with traditional beta testing:

• Too much noise: Beta testing often generates a large volume of feedback that is neither accurate nor actionable.

• Inconsistent participation: Too much or too little participation, often administered with poor processes for collecting and analyzing feedback. Not all use cases get covered, so bugs slip through.

• Good catches but insufficient information: Even when bugs are identified, the reports are often not useful because they lack sufficient information to reproduce the defect.

• Delay: Beta testing slows the release cycle by adding a dedicated phase before the production release.

In addition to these problems, several modern deployment practices are making beta testing less attractive.

Replacing Beta Testing

These modern deployment practices include everything from lean development, which favors small-batch releases that eschew the phased model of development, to deployment methods that enable apps on mobile and desktop platforms to be updated automatically.

In addition, the following trends are putting pressure on beta testing:

Dogfooding: When staff at a company test their own software internally before release by using it day-to-day, whether for work or pleasure, it helps identify issues early without the embarrassment and brand damage of a faulty public release. When the developers themselves are the initial users, the user-feedback loop is immediate, resulting in software with better quality and utility. However, depending on the user profile, such programs can encounter problems similar to those of traditional beta programs: users are often not professional testers, bug reporting may be inconsistent, and the testing does not cover all use cases (for example, new-user registration flows and the like).

Staged roll-out: This is the most basic approach to modern software deployment, in which code is tested and monitored for quality before broad release. It can take several different forms: for a website, a feature may be released to a small number of initial users while activity is closely monitored; for a mobile app, the application may initially be released only in a small market to monitor quality and feedback. Sometimes the staged roll-out approach is a “beta program in disguise”: variations in the actual execution can put it closer to a traditional beta program.

Partial roll-out: This is similar to a staged roll-out: a large, clustered system deploys new code to a small fraction of servers. Those servers are automatically and actively monitored, and if anything goes wrong, the “immune system” detects the problem and automatically rolls back the offending changes.
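Staged and partial roll-outs both need a way to assign a stable fraction of users or servers to the new code. A common sketch is deterministic hash bucketing; the feature name and percentages below are made up for illustration:

```python
import hashlib

def in_rollout(user_id, feature, percent):
    """Deterministic bucketing: hash the (feature, user) pair into 0-99 and
    admit users below the rollout percentage. The same user always gets the
    same answer, so the cohort stays stable as `percent` grows."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

users = [f"user{i}" for i in range(1000)]
enabled = sum(in_rollout(u, "new-checkout", 10) for u in users)
print(f"{enabled} of 1000 users in the 10% cohort")  # roughly 100
```

Raising `percent` from 10 to 25 keeps the original 10% cohort enabled and adds new users on top, which is what makes a staged roll-out monitorable over time.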

Testing in production (TiP): This practice — testing after a product is put into production — is a controversial topic among QA professionals. It can be complementary to up-front testing or used as a means to shift the timing of quality testing from before to after deployment.

Dark launch: Facebook popularized this approach with the launch of their chat service. Without revealing the feature in the UI, their Web application silently generated load to the chat server, simulating the load the service had to process, readying the infrastructure before the real launch.
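A dark launch can be sketched as mirroring each live request to the new backend while serving the user only from the proven path. This is a simplified, synchronous illustration (real systems mirror asynchronously); the handler names are hypothetical:

```python
def handle_request(request, primary, shadow):
    """Dark-launch sketch: silently exercise the new backend with real
    traffic, discard its result, and always serve from the proven path."""
    try:
        shadow(request)          # load-generating call; result is discarded
    except Exception:
        pass                     # shadow failures must never affect users
    return primary(request)

shadow_hits = []
response = handle_request(
    {"user": 42},
    primary=lambda req: "ok",
    shadow=shadow_hits.append,   # stand-in for a call to the new service
)
print(response)  # "ok" -- the user never sees the shadow path
```

The point of the pattern is exactly what the Facebook chat example shows: the infrastructure experiences production load long before the feature is visible.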

Traditional beta testing continues to have a place for certain scenarios, such as when the cost of a buggy release and deploying a fix is very high. Beta programs are also useful when they can work as an early seeding program. (Gamers, for example, love being invited to betas.)

Beta Programs in the Modern Age

In the new world of continuous deployment and app stores, companies would do well to re-examine the focus and goals of beta programs: moving the “functional testing in the wild” burden from beta testing alone to alternative options; using technology to help (beta) testers collect useful information; and extending a quality assurance mentality and its associated procedures to areas other than functional correctness.

With the advent of crowdsourced testing, or what is often referred to as “expert-sourcing” because it often utilizes vetted and trained QA professionals, development organizations can now get the benefit of in-the-wild testing without the downside of beta testing’s high noise level. This option offers companies the ability to test pre-deployment under real-world conditions and, in particular, address the difficult problem of mobile device fragmentation: OS versions, mobile carriers, memory and other mobile device configurations, or location diversity.

Typically, vendors will hire a testing company’s members in specific locales to beta test the software and report defects via agreed-upon forms and channels. Application instrumentation is a technique that only sophisticated dev shops implemented in the past. New tools, including Crashlytics, Apphance, and others, allow crash reporting and user feedback to flow directly from devices via simple instrumentation steps. By enabling testers to send screenshots and reproduction steps with each report, and by automatically collecting logs and other environmental data alongside bugs or crashes, these tools spare the development team from deciphering poorly written beta-test bug reports (such as “application crashed suddenly”).
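The kind of instrumentation these tools provide can be sketched generically: attach environment data to every uncaught exception so reports arrive with reproduction context. This is not the Crashlytics or Apphance API, just an illustrative sketch of the idea:

```python
import platform
import sys
import traceback

def build_crash_report(exc_type, exc, tb):
    """Bundle the stack trace with environment data so a report carries
    reproduction context instead of 'application crashed suddenly'."""
    return {
        "error": f"{exc_type.__name__}: {exc}",
        "stack": traceback.format_tb(tb),
        "os": platform.platform(),
        "python": platform.python_version(),
    }

def install_reporter(send):
    # Route uncaught exceptions through the reporter before the process dies.
    sys.excepthook = lambda *args: send(build_crash_report(*args))

try:
    {}["missing"]                # simulate a defect hit in the field
except KeyError:
    report = build_crash_report(*sys.exc_info())
print(report["error"])  # KeyError: 'missing'
```

A real tool would also ship the report off-device and attach app version, device model, and logs, but the shape of the payload is the same.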

Finally, advanced analytics tools, such as Flurry and Applause, give managers implicit behavioral information and explicit feedback from users of apps in production to make real-time business decisions. Managers use these tools to go beyond app star ratings and drill down on categorized attributes of individual user reviews. As a result, companies can analyze their app’s performance and user sentiment to easily recognize issues that require action. By combining these techniques, development organizations now get useful information about their products in development or in production, and respond intelligently based on real feedback without relying too heavily on traditional beta programs.

So, is beta testing dead? The answer is yes for some organizations, but not for everyone. For companies that want to move fast to remain relevant and keep customers loyal, these new practices help shorten the release cycle by reducing reliance on long beta periods.

LeanKit – Future of Visual Management

LeanKit’s core product is software that lets customers create kanban-style boards for managing teams and projects. It’s a pretty great tool, and they are always working to make it better. But a tool is only as good as the process it supports.


Using LeanKit by itself won’t magically make your team better. Using LeanKit to effectively implement Lean-Agile management practices, with good technical practices, in a healthy, supportive working environment can work wonders. They are active participants in the Lean-Agile community, going to a lot of events. Of course, part of that is because they have a product to sell. But they are also keenly interested in the latest ideas from community thought-leaders, and they want to see and hear how customers and potential customers are “doing” kanban effectively. That informs their product development, they incorporate those ideas into how they run LeanKit as a company, and they like to share their experiences as a kanban team back with the community.

Which brings us to the future of visual management. A kanban board works best if the team sees it all the time. A whiteboard with sticky notes does that automatically, at least for the people in the room, but it doesn’t work so well for a distributed team. An electronic system like LeanKit solves that problem, but you run the risk of the board becoming a status-reporting system that people look at occasionally rather than an always-visible information radiator and hub for collaboration. So how do you get the best of both worlds?

They have long thought that the answer lay in interacting with LeanKit via a large-screen TV. They have seen customers use giant smart touchscreens like those from Smart Technologies. They’re awesome products made by a great company and, they think, well worth it if you can afford them. But not every departmental manager can justify that kind of capital investment. So they’ve experimented with retail-available touchscreens like the HP TouchSmarts connected to a normal computer. A very nice option, but still fairly expensive, say $3,000-4,000 for a screen and computer: more than they felt comfortable recommending to most customers as a real-world, actionable solution.

A plain old big-screen LCD is great as a pure information radiator. You can get a 50-inch for about $600 on Amazon. Since a big screen will last years, you’re really talking about 50 cents a day in cost. That should be very doable if you consider the hourly labor rate of most teams doing kanban and the value of the products they produce. But what about interactivity? The touchscreens may be expensive, but they let you move cards on your LeanKit board, not just view them. You can hook up a computer to the LCD, but the cost of a real PC seems a bit much for a screen you only occasionally interact with, and the user interface is a little clumsy for working with the board on the screen. Do you put a desk in front of the screen where you move the mouse? Not practical.
Enter the smart TVs. For those who haven’t seen one yet, a smart TV combines (obviously) a TV with a decent-but-not-over-the-top processor, integrated WiFi and web browsing, and point-and-click/drag-and-drop interaction with the screen. You can get this built into newer TVs, or you can buy add-on devices that plug into a TV. They have tried several models and liked the LG G2s as the best example of an integrated device and the Sony Internet Player with Google TV as the best of the add-on options.

The integrated device has the benefit of utter simplicity: buy it, hang it on the wall, plug it in, go. And they’re not too expensive, about $1,500 for the 55-inch. They have found, however, that they prefer the Sony add-on device. First, it’s definitely cheaper, about $150 plus the TV, so $800 total cost using the Panasonic 50-inch mentioned above. They also prefer the style of remote that comes with it. The LGs have a point-and-click, Wii-mote-style controller. That’s intuitive but a little touchy for fine-grained mouse movements. The Sony has more of a touchpad controller, like your laptop’s, only in the palm of your hand. Both controllers have a full QWERTY keyboard on the back. And even though it is an add-on, all you have to do is plug it into the TV’s HDMI port. The remote is even easily programmable to replace the TV remote; the extra install time relative to the LG was measured in minutes.

Making things even better, you can connect other peripherals to the TV through the Sony box. In the picture accompanying this story, we’ve got a Logitech Skype webcam connected to the TV (just a 42-inch in this case; a new 50-inch arrives later this week) through the Sony box. This allows us to have always-on HD video conferencing between teams in multiple locations, combined with always-on interactive electronic kanban. It cost less than $1,000 per location. We installed it in minutes (minus the TV bracket) without any special skills or tools.
The sales and marketing team did this, not the engineers. And you would not believe how much it improves the quality of interaction between remote teams. If your entire team can be in the same room to work together all the time, awesome. But that’s a luxury most can’t manage; distributed teams are the reality for most of us. With the latest technology (including LeanKit!) you can retain much more of the experience of being together than ever before, and you can do it easily and cheaply. You probably don’t even need to get permission or a special budget allocation. Order the parts from Amazon today, have them installed in a few days, and start reaping the benefits immediately.

Cloud and QA Environments

Today, organizations are facing a lot of challenges associated with QA environments like unavailability of environments, lack of skills to manage environments, coordinating with multiple vendors who manage these environments, etc. These challenges inhibit the efficiency of QA teams, which eventually impacts the organization’s business. These inherent challenges, along with the cloud evolution, have been catalysts in driving organizations to explore possible cloud adoption for the creation of QA environments. Senior managers will learn about the challenges faced by organizations with regards to traditional QA environments, and the possible benefits that cloud adoption could bring to this space.

Challenges with traditional QA environments

Shared QA environments

Limited hardware assets lead to sharing of resources across different QA teams. Shared environments include applications, middleware, databases, specialized software, and testing tools used by different groups, leading to delays in testing due to high levels of interdependency and differing work priorities among the teams. Shared environments also force non-functional testing, like performance testing, to be carried out in a scaled-down QA environment that lacks an exact simulation of business requirements.

Mismatch and unavailability of infrastructure

Testing often happens in QA environments whose underlying infrastructure does not comply with the recommended hardware configurations. Even when it does, QA environments are often unavailable due to routine infrastructure maintenance activities. The unavailability of QA environments within an organization forces teams to turn to external service providers who provision infrastructure, usually with a lock-in period. This obviously increases the cost of the corresponding project, driving down ROI significantly.

Lack of a standard methodology for building QA environments

The non-availability of a cohesive method for building, using, and managing test environments significantly restrains the ability of QA teams to respond to business units that need QA environments. Further, non-standard practices lead to multiple cycles of problem identification, analysis, and refinement of QA environments.

Lack of skills to manage QA environment

A lack of skilled teams to manage and maintain QA environments leads to significant effort being spent setting them up. Skilled resources are expensive and hard to procure, which increases costs.

No centralized team to manage QA environments

Organizations end up with isolated teams supporting different QA environments. In some scenarios, application development teams manage the environment and data; in others, infrastructure teams do. Each team follows its own processes for environment management, which leads to a lack of single ownership over all QA environments.

High level of multi-vendor coordination

To avail themselves of various testing services, test infrastructure tools, hardware, etc., organizations end up dealing with multiple vendors. This involves setting up stringent operational-level agreements (OLAs) to ensure delivery of the application on time and within acceptable costs.

Multi-vendor engagements often require close monitoring of project progress, with coordination spanning leased environments, leased testing tools, and so on. Besides the coordination challenges this model throws up, the burden on “management time” is immense.

QA methodology remains stagnant as technology evolves

Organizations fail to revise their QA methodologies to match technology evolutions like SOA and cloud computing. This results in validation gaps, which lead to defect-prone applications going live. As technology evolves, more applications are migrating to, or being built on, newer technologies like cloud and SOA, which makes it necessary for the organization to have a robust QA methodology in place to test these new-age applications.

Is the cloud the solution?

A separate QA environment dedicated to application validation can address most of the challenges stated above. In traditional models, organizations end up owning many hardware and software assets or leasing additional infrastructure from external service providers. Both options increase the organization’s CAPEX, which does not go down well in current economic conditions with decreasing IT spend. Traditional QA environments also involve procurement and leasing, which further delays application go-live due to factors like procurement lead time and contract negotiation with external infrastructure providers.

Businesses can find means to effectively address QA environment concerns with the adoption of the cloud. In the cloud, organizations will benefit from features like demand provisioning, elasticity, resource sharing, availability and security. Organizations will also be able to move from traditional CAPEX models to OPEX models, leveraging the on-demand and pay-per-use model of computing resources for their testing and QA infrastructure needs. This would result in significant cost savings for the businesses. The pay-per-use model can also help organizations reduce the maintenance overhead and help them focus more on their business rather than spending effort/management time over environment procurement/leasing, environment management and infrastructure vendor management.
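The CAPEX-to-OPEX shift described above can be illustrated with back-of-the-envelope arithmetic. All figures below are hypothetical, purely to show the shape of the comparison:

```python
def traditional_capex(servers, cost_per_server, annual_maintenance, years):
    """Own-the-hardware model: pay up front, then maintain idle capacity."""
    return servers * cost_per_server + annual_maintenance * years

def cloud_opex(hours_used_per_year, rate_per_hour, years):
    """Pay-per-use model: cost tracks actual QA environment usage."""
    return hours_used_per_year * rate_per_hour * years

# Hypothetical figures: 20 QA servers at $5k each with $15k/yr upkeep,
# versus 20 cloud instances used 500 hours/yr each at $0.50/hour.
print(traditional_capex(20, 5_000, 15_000, 3))   # 145000
print(cloud_opex(20 * 500, 0.50, 3))             # 15000.0
```

The gap widens the more intermittent the QA workload is, which is why test environments are often cited as a natural first cloud workload.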

With the adoption of the cloud, organizations would also be able to effortlessly bring in centralization of QA, thus putting an end to issues arising due to the lack of standard methodologies for building QA environments and non-availability of skilled resources.

The QA environmental need is the perfect opportunity for organizations to begin their cloud adoption journey before making any decision on moving applications to the cloud. Leveraging cloud for the QA environment needs will help organizations address their test environment challenges as well as help them achieve benefits like shorter release cycles, business flexibility and better business service levels. Businesses need to ensure that they have all the necessary roadmaps and knowledge to help them transform their traditional test infrastructure to a cloud-based test infrastructure.

The current dynamic needs of business and a volatile economy have led organizations to demand that CIOs meet increasing business demands with shrinking budget allocations. Every CIO has started focusing on and analyzing each IT division’s operations, the significant investments made, and the ROI subsequently generated. One of the most scrutinized elements is QA infrastructure cost, because nearly 30 to 50% of servers in the organization are utilized by QA teams, according to Cloud computing: Innovative solutions for test environments by IBM Global Services. Hence, if these assets are underutilized, the investments in them are also underutilized, significantly impacting ROI.

The evolution of cloud has made organizations sit up and start thinking about how they can leverage the advantages of cloud as an infrastructure or as a platform or even as software, to overcome the challenges of today’s dynamic business and IT needs. In Cloud computing: Addressing software QA issues, we discussed the challenges associated with traditional QA environments and how cloud was a solution to overcome these challenges. In this post, we share an in-depth analysis of the various factors and explain the benefits that make QA environment the perfect place for CIOs to begin cloud adoption.

Use case evaluation for cloud adoption

Infosys recently embarked on research which evaluated the popular cloud use cases against parameters like business risk, business value, relative simplicity and cloud technology maturity for cloud adoption. The analysis covers the following cloud use cases in the forms of cloud as a software, platform and infrastructure:

SaaS (Software as a Service): Online collaboration solutions, enterprise applications and    business/industry applications

PaaS (Platform as a Service): Web 2.0 applications, databases and middleware

IaaS (Infrastructure as a Service): Storage, server and networks, production custom applications and QA/DEV environments

The table below rates the typical cloud cases as High (H), Medium (M) or Low (L) against each parameter – business risk, business value, relative simplicity and cloud technology maturity.

By evaluating the ratings for each parameter, we can deduce the optimal use case for cloud adoption from an overall perspective.

Table 1: The use case evaluation for cloud adoption
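The deduction described above can be sketched as a simple scoring exercise. The H/M/L ratings below are illustrative placeholders, not the actual values from Table 1:

```python
# Illustrative scoring of cloud use cases against the four parameters.
# The ratings are hypothetical placeholders, not the real Table 1 values.

SCORES = {"H": 3, "M": 2, "L": 1}

use_cases = {
    # (business_risk, business_value, simplicity, maturity)
    "SaaS enterprise apps":     ("H", "H", "M", "H"),
    "PaaS Web 2.0 apps":        ("M", "M", "M", "H"),
    "IaaS QA/Dev environments": ("L", "H", "H", "H"),
}

def composite(risk, value, simplicity, maturity):
    # Business risk is inverted: a LOW risk rating should score high.
    return (4 - SCORES[risk]) + SCORES[value] + SCORES[simplicity] + SCORES[maturity]

best = max(use_cases, key=lambda name: composite(*use_cases[name]))
print(best)  # with these placeholder ratings: IaaS QA/Dev environments
```

With any real set of ratings, the same comparison applies: low risk, high value, high simplicity and high maturity together single out the strongest starting point.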

Business risk

The business risk of migrating live applications straight into the cloud is quite high: a failure would have a direct and immediate impact on the organization’s business. Cloud adoption with QA environments is a far more appropriate starting point, as the associated business risk was found to be comparatively low against the other cloud use cases.

Business value

The business value of cloud adoption is evidently high with the SaaS model when it covers enterprise applications such as CRM, ERP, etc. Organizations usually gain immediately with SaaS, getting a ready-to-go market solution with a very short turnaround time. However, as Table 1 shows, organizations also gain significantly when cloud is leveraged for their QA environment needs, thanks to the increased asset utilization, reduced proliferation, greater serviceability and greater provisioning agility that cloud provides for QA/Dev environments.

Relative simplicity or ease of implementation

SaaS and PaaS use cases require integration, secure authentication and secure policy enforcement, which complicates implementation. Cloud adoption in QA environments, by contrast, stands out for its relative ease of implementation.

Cloud technology maturity

Cloud technology has high maturity levels across:

  • SaaS in the form of online collaboration solutions
  • PaaS in the form of Web 2.0 applications and databases
  • IaaS in the form of storage, server and networks and QA environments

The evolution of Salesforce CRM (SaaS), Windows Azure (PaaS) and Amazon EC2 (IaaS) shows that cloud technology has matured through the dynamic convergence of information technology, business models and consumer experience. These use cases are a good place to begin cloud adoption from a technology maturity perspective.

Overall recommendation

The one use case that stands out distinctly and strongly, across all parameters, is the adoption of cloud in QA/Dev environments. Advantages such as increased asset utilization, reduced proliferation, greater agility in servicing requests and faster release cycle times, position QA environments as the most optimal use case for cloud adoption from an overall recommendation standpoint.

Benefits delivered by cloud-based QA environments

Let us now look into the key benefits delivered by cloud-based QA environments:

Dynamic and scalable provisioning

With cloud-based QA environments, organizations can quickly provision and de-provision virtual machines on demand, drastically reducing provisioning time from several months to a few minutes. This ability to scale gives organizations an edge in delivering high-quality services across diverse QA environment requirements. It also helps the business focus on core areas by reducing the time spent on procurement operations.
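On-demand provisioning is essentially a thin self-service layer over a virtualization API. The `CloudPool` class below is a hypothetical in-memory stand-in for such an API, used only to illustrate the provision/de-provision lifecycle:

```python
import uuid

class CloudPool:
    """Hypothetical stand-in for a cloud provider's provisioning API."""

    def __init__(self, capacity):
        self.capacity = capacity  # number of VM slots in the shared pool
        self.vms = {}

    def provision(self, template):
        # Self-service: a QA team requests a VM from a named template
        # and gets it in minutes, not months.
        if len(self.vms) >= self.capacity:
            raise RuntimeError("pool exhausted: scale out or queue the request")
        vm_id = str(uuid.uuid4())
        self.vms[vm_id] = {"template": template, "state": "running"}
        return vm_id

    def deprovision(self, vm_id):
        # Releasing the VM returns its capacity to the shared pool,
        # which is what keeps overall utilization high.
        del self.vms[vm_id]

pool = CloudPool(capacity=10)
vm = pool.provision("qa-regression-win2008")  # template name is illustrative
pool.deprovision(vm)
```

A real deployment would back this interface with a hypervisor or cloud management platform, but the request/release cycle shown here is the core of the benefit.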

Reduced time to market

Test cycles have always been on the critical path for release to production. Cloud adoption in a QA environment enables faster on-demand provisioning of resources, increased productivity and shorter lifecycles for application development and testing, all of which contribute significantly to faster time to market. Interestingly, in traditional QA environments, 30% of an application's production defects were primarily due to wrongly configured test environments. Cloud eliminates this class of defect, further reducing time to market.

Greater environment control

With cloud adoption for QA environments, multiple channels requesting QA environments for various projects are consolidated into a single channel, significantly reducing server and application sprawl. This leads to better control over the environment.

Reduced TCO and improved resource utilization

The capability to share environments due to virtualization improves resource utilization, thus reducing associated costs of hardware and software licenses. Cloud-based QA environments bring in significant cost savings of almost 50% on IT support costs, helping organizations move from a CAPEX to an OPEX mode.
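The utilization argument can be made concrete with back-of-the-envelope arithmetic; all figures below are invented for illustration, not taken from the studies cited above:

```python
# Hypothetical cost comparison; every number here is illustrative only.
servers_needed_peak = 100        # dedicated QA servers sized for peak load
avg_utilization = 0.40           # typical utilization of dedicated QA servers
cost_per_server_year = 2000.0    # yearly cost per server (hardware + support)

# Traditional (CAPEX): pay for peak capacity regardless of utilization.
capex_cost = servers_needed_peak * cost_per_server_year

# Cloud (OPEX): pay only for the capacity actually used.
opex_cost = servers_needed_peak * avg_utilization * cost_per_server_year

savings = 1 - opex_cost / capex_cost
print(f"{savings:.0%}")  # 60% with these illustrative numbers
```

The point of the sketch is the structure, not the numbers: the lower the average utilization of dedicated assets, the larger the saving from paying only for what is used.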


There is no doubt that the QA environment is an apt place for organizations to begin their cloud adoption journey. Organizations should explore and evaluate their internal QA infrastructure for conversion into a secure private enterprise cloud. Where internal infrastructure is unavailable, they should engage an external cloud service provider able to deliver QA infrastructure as a service in a pay-as-you-use model. The cost savings achieved through such infrastructure optimization can then be reinvested into core business projects, bringing in the innovation needed to drive enterprise sustainability and market relevance.

Determining the cloud model that best meets your business requirements

Once organizations have decided to take the cloud route for their QA requirements, the next challenge is determining the cloud deployment model that suits their business needs and size. In this third part of our three-part series, we give decision makers the information they need to evaluate cloud deployment models and choose the ideal model for their organization.

Evaluating your existing QA infrastructure

To properly make a decision about cloud models, an in-depth understanding and evaluation of the existing QA infrastructure against the following parameters is required.

QA infrastructure requirements:

An organization’s demand for QA infrastructure depends on all application requirements, environment needs for different types of testing, the duration of the testing cycles and the frequency of testing in a given calendar year.

Current QA infrastructure availability:

It is recommended that the organization gauges its existing QA infrastructure assets and makes an inventory of all the related hardware and software assets. Then, evaluate the needs and see if the current demand for QA infrastructure can be met with what’s available.

Availability of budget:

It’s important to assess whether an organization is keen to move from a CAPEX to an OPEX mode for its QA environments, its willingness to allocate budget for cloud investments, and the budget amount. This factor plays a key role in determining the right cloud deployment model for the organization.

Application release calendar:

The demand for QA infrastructure also depends on the release calendars of all applications in the organization, factoring in which applications use shared versus dedicated QA environments.

Evaluation and applicability scenarios of cloud deployment models

After evaluating the current QA infrastructure, look into the scenarios that ideally fit each cloud deployment model. Let’s review the pros, cons, applicability scenarios and the organizations best suited to each model. There are four widespread cloud deployment models to explore from an infrastructure perspective: private, public, virtual private and hybrid cloud.

Enterprise private cloud

The enterprise private cloud is essentially a cloud resource pool that sits within an organization’s network and firewall. It is created from the organization’s already-owned hardware and software assets.


Pros:

  • Optimal utilization of an organization’s existing assets.
  • On-demand provisioning that can be customized to the QA infrastructure needs.
  • Higher security and compliance with regulations and standards since the cloud is setup within the organization’s firewall.
  • The organization can use the time and resources saved from managing the environment, on more important and core business activities.


Cons:

  • Additional CAPEX would be required to set up a private cloud, along with investment in the hardware assets and tools needed for automating cloud provisioning and managing services.

Applicability scenarios

The enterprise private cloud would be deemed fit where organizations:

  • Already have adequate hardware which suffices their current QA infrastructure needs and is underutilized.
  • Would be able to manage future QA infrastructure demands and accommodate all application release cycles with the current availability.
  • Are willing to invest in virtualization, cloud management software, SAN storage if needed and server class machines to manage the cloud resource pool.

Ideal for:

  • Large organizations that have an underutilized QA infrastructure.
  • Small and medium-sized organizations that lack QA infrastructure assets and need them for a longer duration.

Public cloud

A public cloud is a cloud deployment model where the cloud resource pool is outside the organization’s firewall and built using a cloud service provider’s hardware and software assets.


Pros:

  • On-demand provisioning with no CAPEX involved.
  • No vendor lock-in concerns. 
  • No resources are required to manage the public cloud since the cloud service vendor takes care of the same.


Cons:

  • Concerns over data privacy, security and compliance with regulations and standards.

Applicability scenarios

The public cloud would be deemed fit where organizations:

  • Do not own any QA infrastructure-related hardware assets.
  • Have no intent to invest in their QA infrastructure.
  • Are short on resources to manage their QA environments.

Ideal for:

  • Small and medium-sized organizations that do not own any QA infrastructure and have short-term testing requirements.

Virtual private cloud

Virtual private clouds are third party public clouds or segments of public cloud that have additional features for security and compliance.


Pros:

  • On-demand provisioning with no CAPEX involved.
  • No resources are required to manage the public cloud since the cloud service vendor takes care of it.
  • No vendor lock-in concerns. 
  • Compliance with data security, privacy, standards and regulations is possible by using public cloud instances that are not shared with other organizations subscribing to the same vendor.


Cons:

  • Additional attestations, such as SAS 70 validation, would be needed from the cloud service provider.

Applicability scenarios

The virtual private cloud would be deemed fit where organizations:

  • Do not own any QA infrastructure related hardware assets.
  • Do not want to make a CAPEX investment for their QA infrastructure.
  • Are short on resources for managing their QA environments.
  • Have the prime responsibility of complying with standards, regulations, data privacy and security.

Ideal for:

  • Organizations of all sizes that do not own any QA-related infrastructure assets, have short-term testing requirements, and require security and compliance with standards.

Hybrid cloud

Hybrid cloud is a combination of two or more cloud deployment models (which includes private cloud, public cloud and virtual private cloud).


Pros:

  • Improved utilization of an organization’s existing assets.
  • On-demand provisioning can be customized to the QA infrastructure needs of the organization.
  • All long-term QA environment needs can be managed with the private cloud, while short-term/sporadic needs that cannot be accommodated with existing assets can be managed in a public cloud without any additional CAPEX.
  • Data security, privacy, standards and regulations can be complied with by using private cloud instances, while non-critical application testing can be moved into public clouds.


Cons:

  • Integration between public and private clouds can be a challenge when applications in these deployment models need to interact with each other to simulate end-to-end testing scenarios.

Applicability scenario

The hybrid cloud would be deemed fit where organizations:

  • Own hardware assets related to QA infrastructure to a considerable extent.
  • Are willing to invest in virtualization, cloud management software, SAN storage if needed, and server-class machines for managing the cloud resource pool.
  • Have standards, regulations, data privacy and security requirements to comply with.
  • Cannot completely manage the future QA infrastructure demands with their available hardware assets.

Ideal for:

  • Large organizations that can handle the majority of their long-term QA infrastructure needs within their own private clouds and have certain short-term/sporadic QA infrastructure needs that can be handled in a public cloud.

Businesses of all sizes can begin their cloud adoption journey with QA environments, given a suitable cloud deployment model. The evaluation of the different forms of cloud makes it clear that long-term resources should move to private clouds, while short-term and sporadic resources belong in public clouds in a pay-as-you-use mode, helping achieve an effective ROI.
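The applicability scenarios above can be summarized as a rule-of-thumb function. The rules below are a direct paraphrase of those scenarios; treat them as a starting heuristic, not a complete decision procedure:

```python
def recommend_deployment_model(owns_qa_hardware: bool,
                               covers_future_demand: bool,
                               willing_to_invest: bool,
                               needs_compliance: bool) -> str:
    """Heuristic mapping of the applicability scenarios to a deployment model."""
    if owns_qa_hardware and willing_to_invest:
        # Existing, underutilized assets: consolidate into a private cloud;
        # fall back to hybrid when they cannot absorb all future demand.
        return "enterprise private cloud" if covers_future_demand else "hybrid cloud"
    if needs_compliance:
        # No assets, but strict security/regulatory obligations.
        return "virtual private cloud"
    # No assets, no CAPEX appetite, no special compliance burden.
    return "public cloud"

print(recommend_deployment_model(True, True, True, False))    # enterprise private cloud
print(recommend_deployment_model(True, False, True, True))    # hybrid cloud
print(recommend_deployment_model(False, False, False, True))  # virtual private cloud
```

In practice the evaluation of QA infrastructure requirements, budget and release calendars described earlier feeds these inputs; the function only captures the final mapping.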

Continuous Delivery

Getting software released to users is often a painful, risky, and time-consuming process.

Continuous Delivery, a groundbreaking book, sets out the principles and technical practices that enable rapid, incremental delivery of high-quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours, sometimes even minutes, no matter the size of a project or the complexity of its code base.

Jez Humble and David Farley begin by presenting the foundations of a rapid, reliable, low-risk delivery process. Next, they introduce the “deployment pipeline,” an automated process for managing all changes, from check-in to release. Finally, they discuss the “ecosystem” needed to support continuous delivery, from infrastructure, data and configuration management to governance.

The authors introduce state-of-the-art techniques, including automated infrastructure management and data migration, and the use of virtualization. For each, they review key issues, identify best practices, and demonstrate how to mitigate risks. Coverage includes:

• Automating all facets of building, integrating, testing, and deploying software

• Implementing deployment pipelines at team and organizational levels

• Improving collaboration between developers, testers, and operations

• Developing features incrementally on large and distributed teams

• Implementing an effective configuration management strategy

• Automating acceptance testing, from analysis to implementation

• Testing capacity and other non-functional requirements

• Implementing continuous deployment and zero-downtime releases

• Managing infrastructure, data, components and dependencies

• Navigating risk management, compliance, and auditing

Whether you’re a developer, systems administrator, tester, or manager, this book will help your organization move from idea to release faster than ever—so you can deliver value to your business rapidly and reliably.

ThoughtWorks Continuous Delivery

A new perspective – the release process as a business advantage.

Release software on-demand, not on Red Alert.

ThoughtWorks Continuous Delivery transforms manual, disconnected and error-prone processes to make enterprise software releases so fast and assured they are a non-event rather than a Big Event; so well-controlled and automated that release timing can be placed in the hands of business stakeholders. ThoughtWorks Continuous Delivery is a new vision of how systems should be delivered into production: making delivery so responsive, fast and reliable that the deployment pipeline becomes a competitive advantage for the business.

It optimizes all deployment pipeline elements – code integration, environment configuration, testing, performance analysis, security vetting, compliance checks, staging, and final release – in an integrated manner, so that all fixes and features can make their way from development to release in a near-continuous flow. At any point, you have an accurate view of the deployment pipeline: what’s tested, approved and ready to go, and what’s at any other stage. Releasing what’s ready is as straightforward and automated as pressing a button.

Operational, cost and reliability improvements within IT…

  • Faster, safer delivery – removal of waste, risk and bottlenecks. Releases are reliable, routine “non-events”.
  • Increased automation – speed the whole process while improving quality.
  • Exceptional visibility – at all times you know where each individual feature is in the pipeline, and its status.
  • Improved compliance – support for standard frameworks such as ITIL.
  • Collaboration – Test, support, development, operations work with each other as one delivery team.

…Bring new strategic capabilities to the business:

  • Release on demand – The ability to push releases to customers on demand places you first to market when new opportunities arise. Make competitors react to your moves.
  • Build the right thing – Explore new ideas and market test them quickly with much less effort and cost.
  • Continuous connection to customers – Faster releases show your customers you hear them.

ThoughtWorks has the expertise and experience within the enterprise to help you make the journey.

Assessments start with your goals and current situation. Through a series of highly collaborative workshops and deep-dives we evaluate your needs, identify gaps and determine the best course of action. The outcome is a roadmap of immediately actionable recommendations. Assessments are conducted onsite and take 1-3 weeks.
Implementations focus on executing a roadmap of the technical, process and organizational changes needed. ThoughtWorks works side-by-side with you, providing both technical and coaching expertise, evolving you toward integrated Continuous Delivery practices.

Its services are customized to your specific needs, but typically include:

  • Automating code, database and configuration deployment to make a reliable, rapid process. Use the same deployment mechanism for all environments.
  • Introducing Continuous Integration to support early testing and feedback on development.
  • Transforming development and operations teams into one delivery team, giving operations a seat at the table throughout the process to ensure operational needs are met.
  • Automating infrastructure and configuration management, along with use of cloud/virtualization to reduce the pain and cost of managing environments, keeping them in consistent and desired states.
  • Building a metrics dashboard and alerts to give automated feedback on the production readiness of your applications every time there is a change – to code, infrastructure, configuration or database.

Continuous Delivery by ThoughtWorker Jez Humble and alumnus Dave Farley sets out the principles and practices that enable rapid, incremental delivery of high quality, valuable new functionality.

The pattern central to continuous delivery is the deployment pipeline: an automated implementation of an application’s build, deploy, test, and release process. The automated deployment process should be used by everybody, and it should be the only way to deploy software; this ensures the deployment scripts work when needed. The same scripts should be used in every environment.
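A deployment pipeline of that kind can be sketched as an ordered series of stages, with one deploy step reused for every environment. The stage bodies here are hypothetical placeholders for real build, test and deployment tooling:

```python
# Minimal deployment-pipeline sketch: every change flows through the same
# ordered stages, and the same deploy step is reused for every environment.
# Stage bodies are placeholders for real build/test/deploy tooling.

def build(change):
    return {"change": change, "artifact": f"{change}.tar.gz"}

def unit_test(release):
    return release  # a real stage would raise here to fail the pipeline fast

def deploy(release, env):
    # The single deployment mechanism, parameterized by environment.
    return {**release, "env": env}

def acceptance_test(release):
    return release

def pipeline(change):
    release = unit_test(build(change))
    # One deploy script for all environments: QA, then staging, then production.
    for env in ("qa", "staging", "production"):
        deployed = deploy(release, env)
        acceptance_test(deployed)
    return deployed

result = pipeline("feature-123")
print(result["env"])  # production
```

Because every environment goes through the identical `deploy` function, the production release exercises exactly the mechanism already proven in QA and staging, which is the point of the "same scripts in every environment" rule.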

Agile Practices in Large Organizations

The ability to scale agile software development to large organizations has always had skeptics. Typical arguments are that agile works for small software development groups but not for large ones. Or, that they use outsourcing providers with fixed price contracts for software development and an agile methodology does not provide the discipline for them to fulfill contracts without a great deal of specification and design upfront.

Scaling agile software development to large organizations is still possible if enough attention is paid to:

  • Scaling agile practices – Understanding agile practices and making sure that the rest of the organization also does the same.
  • Scaling agile work – Organizing work and people appropriately for scaling agile properly.

Scaling agile practices to a large organization
Lean thinking guides agile practices significantly. Many of its ideas originate in the Toyota Production System (TPS) and the House of Quality that companies practicing lean thinking use. The main principle in lean thinking is that people are inherently responsible, capable of self-organization and learning, and do not need active management from supervisors. The other main idea is continuous improvement, which is best practiced by the software development people who actually do the work. The Japanese technique of Gembutsu, or “Go See,” captures this: go to where the work is actually done and observe it firsthand.

The principle is that every software development effort, in every product or project environment, is different, and that methodologies and practices need to be tailored by the people who do the work, after closely observing what is happening with the project for a while.

Reduction of waste is another strong agile practice that needs to be understood clearly and scaled in a large organization. Duplication of code in two different software projects is pretty common and well-known. Teams waiting for requirements documents to be complete and approved, waiting for design documents for coding to start, waiting for completed code for testing to start are all well-known wastes due to delays. Many processes like the stage-gate and other product management practices introduce their own delays. Software development teams waste time twiddling their thumbs while they are waiting.

For success, misconceptions about scaling agile in large organizations need to be addressed. Agile does not mean there should be no documentation. Agile does not mean you are not disciplined. Agile does not mean no planning. The Agile Manifesto lays out a continuum of emphasis – individuals and interactions over processes and tools, for example. It means that individuals and interactions are more important than any one process or extensive documentation, not that processes and documentation are unimportant. Removing misconceptions is very important for agile to scale, because such misconceptions have the potential to derail adoption.

Scaling agile work to a large organization
Organizing agile work in a large organization consists of two major areas that need to be addressed; tackling one without the other is ineffective and counterproductive. These are organizing the work to be done and organizing the people.

Organizing work traditionally has been done along internal divisions such as product divisions (personal tax preparation products and corporate tax preparation products, for example), functional divisions (user interface group, database management group, middleware group, etc.) or platforms (Windows, Windows Mobile, Unix, etc.).

All of these ways of organizing work waste enormous amounts of talent and time spent waiting. In practice, there are almost always delays in handoffs, with people waiting for someone to give them something so that they can continue their own work. The UI group may be waiting for the middleware group to finish their designs. There could be enormous duplication of code – two product divisions could be writing the same code to do the same thing without realizing it. There could be very good programmers who are skilled in UI design, coding, and database design and implementation. The silo method of organizing work leaves a lot of talent untapped and unused.

It is better to organize work around requirements or features. Requirement areas will have their own requirement area owners that report to the product owner. Requirement areas could be IP protocols or performance or device support in the case of a telecommunication software product, for example.

Or, they could be organized around features, such as downloading device data or batch download of data in the case of an embedded hardware/software product. In both cases, teams address the entire set of development functions – coding, UI design and development, and database design and development.

Organizing people for scaling agile requires a lot of organizational change. It needs to be reflected in the policies and procedures of the company, and it needs to be adopted and used diligently on a daily basis for agile to be effective. Just adopting the superficial ways of organizing work without addressing these will be ineffective. Organizing people needs to follow the principles of empowerment, self-organization and self-management.

Reporting hierarchies need to be flattened first, and reporting spans should be larger: if people are empowered and self-managed, you need fewer managers to oversee their work. Managers need to become coaches or subject matter experts. Multi-skilling and job rotation need to be built into the system. Software engineers may need to be experts at coding, architecture, design, database design and development, and testing. Job titles prevent people from utilizing their full potential and contributing their best to the organization. Since teamwork now needs to be emphasized, reward structures need to be modified, and job titles get in the way of teamwork. Job titles need to become generic, with pay tied automatically to seniority and experience. These are pretty radical changes, but without them, re-organizing work alone may not help agile scale. These changes enable employees to be more proactive in taking on responsibility, self-management and contribution.

Agile scaling, distributed and offshore software development
Agile scaling is really difficult with distributed and offshore software development. Many ideas that work when software development is centralized break down when the teams are distributed or done offshore.

Cultural and time zone differences do not pose big problems when software development is centralized. However, they become big problems in scaling agile development when teams are distributed, with some offshore. The key is to adapt and modify agile practices so that they still work. A daily standup is possible if the entire team is in the same building or campus; if the team is distributed across the globe, a weekly standup may be more practical and advisable. Clients or product owners may not be available for a daily standup at odd hours (because of time zone differences), making a weekly standup the only feasible solution.

Another way to address this is to use the distributed or offshore team as a self-contained requirement area group or feature group. Communication is the #1 problem with distributed or offshore teams. There are no easy answers, except to use as many communication mechanisms as possible – Skype or daily video conferences, weekly team meetings, onsite visits by offshore teams, and visits by the client and onshore teams to the offshore location, at least every quarter or so.

Agile software development works in the small, and it can also work in the large if approached carefully and if the necessary organizational changes are diligently made and followed. Understanding and infusing the principles behind agile practices goes a long way toward making agile scale successfully in large organizations. The key is not to adopt only the superficial rituals but to really adapt agile practices to the situation at hand, one organization at a time. Every organization and every software development project has unique aspects, and a single magic bullet may not work in all cases. The underlying principle in agile is this flexibility and adaptation, rather than blindly following a single set of prescriptions!

Software Design Patterns for Information Visualization


The network depicts interactions between software design patterns, providing a map of how the various design patterns apply to or mutually reinforce each other. Patterns with italicized text are taken from Gamma et al.’s collection of general design patterns; those with a standard typeface are visualization-specific patterns introduced in this paper.

Despite a diversity of software architectures supporting information visualization, it is often difficult to identify, evaluate, and re-apply the design solutions implemented within such frameworks. One popular and effective approach for addressing such difficulties is to capture successful solutions in design patterns, abstract descriptions of interacting software components that can be customized to solve design problems within a particular context. Based upon a review of existing frameworks and experiences building visualization software, there are a series of design patterns for the domain of information visualization. Structure, context of use, and interrelations of patterns spanning data representation, graphics, and interaction has to be viewed in detail. By representing design knowledge in a reusable form, these patterns can be used to facilitate software design, implementation, and evaluation, and improve developer education and communication.