By C. Warren Axelrod, Ph.D.


Would we achieve higher standards for software and data security if contractors and subcontractors accepted stringent cybersecurity requirements in software development agreements, vendors signed off on similar requirements in software license agreements, and service providers included cybersecurity components in their offerings? One might expect that contractual provisions would improve the current situation, but results frequently fall short of expectations, usually for reasons that go ignored or unrecognized.

In this article, we will identify cybersecurity risks inherent in developed and acquired software products as well as software-related service issues. We will then describe how to mitigate these risks through contractual terms and conditions. Finally, we will discuss the conditions that may affect agreed-upon contracts.


There are many publications about cybersecurity risks and mitigating them, and quite a few resources about provisions that should be included in general IT agreements. However, there is much less information on translating cybersecurity risk-mitigation requirements into meaningful contractual clauses, negotiating provisions embodied in those clauses, and enforcing those provisions.

The overall risk assessment and contract development and management processes are represented in Figure 1. We will discuss each phase as it applies to software cybersecurity for various categories of software. We will also:

  • Discuss identifying logical and physical software-related risks.
  • Suggest various mitigation strategies.
  • Translate those strategies into cybersecurity requirements.
  • Describe the costs and benefits of each recommendation.
  • Prioritize the risks based on cost-benefit analyses.
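The last two steps, estimating costs and benefits and then prioritizing the risks, can be sketched in code. The following is a minimal illustration only; the risk names and dollar figures are hypothetical, not drawn from the article:

```python
# Hypothetical sketch: rank candidate risk mitigations by benefit-to-cost
# ratio, so that negotiation effort goes to the highest-payoff clauses.
# All risk names and figures below are illustrative placeholders.

def prioritize(risks):
    """Sort risks by expected-benefit / mitigation-cost, highest first."""
    return sorted(risks, key=lambda r: r["benefit"] / r["cost"], reverse=True)

risks = [
    {"name": "unpatched third-party library", "benefit": 500_000, "cost": 50_000},
    {"name": "weak vendor access controls",   "benefit": 200_000, "cost": 10_000},
    {"name": "no source-code escrow",         "benefit": 100_000, "cost": 40_000},
]

for r in prioritize(risks):
    print(f'{r["name"]}: ratio {r["benefit"] / r["cost"]:.1f}')
```

A real analysis would, of course, have to estimate benefits (expected loss avoided) and costs far more carefully than a single number per risk.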

Each of these steps is necessary so that customers can enter into meaningful and productive contract negotiations with prospective suppliers. Once the terms and conditions of the relationship between customer and supplier have been established and agreed upon, they can draft a contract and further negotiate it. Customers and vendors must make sure contractual terms are followed once the contract has been executed. They must also take action when deviations occur.

Different Types of Software

Software is, generally speaking, program code that provides computer capabilities, including:

  • Creating documents and spreadsheets.
  • Managing financial transactions.
  • Controlling nuclear power plants.

Software is ubiquitous and continues to spread across the world. Indeed, some far-sighted major manufacturing companies, such as General Electric [1], are changing their business models to accommodate and benefit from the emerging software-intensive world.

The various categories of software have been around for more than half a century. They include:

  • Custom-built software programs.
      • Internal development, testing and implementation.
      • External development, testing and implementation.
      • Hybrid internal/external development, testing and implementation.
  • Pre-packaged software products.
      • Commercial off-the-shelf (COTS) software.
      • Government off-the-shelf (GOTS) software.
      • Customized COTS and GOTS.
  • Open-source software.
  • Embedded software.
  • Software-based services.
      • Direct services.
      • Software as a Service (SaaS) in the cloud.

Different types of contracts apply to each of the above software categories, and these contracts handle cybersecurity in a variety of ways, depending on the context in which the software runs and on its business criticality.

The Importance of Contracts

Those who negotiate and enforce computer contracts are familiar with the all-too-common vendor statement: “We’ll put this contract in a drawer (or file folder) and never reference it again.” Often that is indeed what happens; when contractual terms do need to be invoked, however, the situation has usually become adversarial. The statement is also a ploy to make customers feel more comfortable with boilerplate vendor agreements and not push as vigorously as they otherwise might for stiffer contractual terms and conditions.

Nevertheless, not standing up for contract provisions that are important to your organization is a mistake. The drawer ploy is commonly used with take-it-or-leave-it contracts, where vendors are not willing to negotiate at all, and it may be accompanied by this reassurance: “Well, our major customers have already agreed to this.” Such assertions are not necessarily true and should be verified if possible.¹

Contracts serve the very important purpose of mitigating risks — not only for customers but also for suppliers — and of specifying each party’s responsibilities and liabilities when a covered event occurs or when there is a breach of contract.² It is up to each party to anticipate what those risks might be and to specify how to handle them. When it comes to computer software, risks have escalated rapidly over time, with massive changes happening at an accelerating pace during recent years.

Evolving Software Cybersecurity Risks

The software marketplace has changed dramatically over the past several decades. Prior to the 1980s, software licenses were often bundled into overall computer-system purchases.³ You could choose from a limited selection of commercial off-the-shelf products — mostly compilers, utilities and other system software. The other option was building your own applications and system software or retaining third parties that provided software development and system integration services.

A lot of proprietary system and communications software — and some application software products — were managed under all-encompassing take-it-or-leave-it contracts. These were generally nonnegotiable and came from companies like IBM and Digital Equipment Corporation (DEC).⁴

Even when acquiring third-party software from less dominant vendors, the best you could expect was for the source code to be stored in escrow against the possibility that the vendor would go out of business or no longer support the product. In this case, customers would then have to support the product themselves, but at least they would have more readable source code to work with.⁵

In subsequent decades, third-party reviews of vendors’ products, services and facilities became more common. Customers often used these to reassure themselves — and their auditors — about the quality of the product or service.⁶

On the liability side, software vendors have notoriously avoided responsibility for ensuring that software actually works as intended. At most, they might agree to pay customers up to the purchase price of the product if it failed to live up to expectations. However, they have studiously avoided committing to pay for consequential damages attributable to the software.

This may be changing as more software is used to control safety-critical systems, such as those operating autonomous vehicles,⁷ especially in the wake of a fatal accident involving a Tesla car running on “autopilot.”

With the advent of personal computers and ubiquitous internal and external networks, a new category of software purchaser arose — the individual. It wasn’t that commercial and government entities didn’t continue to make up much of the market by value. They did and still do.

However, we began to see a change in the balance between vendor and customer, as individuals were forced to acquire software products on a take-it-or-leave-it basis. A single person had no negotiating power and could not confront vendors successfully alone.

Along with the proliferation of distributed computing came concerns about security and privacy. Previously, the main concern had been protecting the confidentiality of data for business reasons.⁸ Personal and business computer systems became victims of increasing numbers of sophisticated attacks in which hackers stole information for espionage, identity theft and fraud. Over time, it became apparent that the vast majority of successful attacks came through the application layer, yet customers had little control over the security of the applications they used.

Today, with the burgeoning use of the cloud, application software is bundled into SaaS (software as a service) and system software (e.g., operating systems) into PaaS (platform as a service) offerings. Because of this, both organizations and individuals remain relatively powerless when it comes to asserting their requirements and desires for higher-quality software.

Custom-built software is another matter. If software is developed in-house, an organization ostensibly has control over its quality and security. Yet building security into applications came late and still has relatively little momentum. Security often takes a back seat to competitive features, marketability, convenience, performance and other factors.⁹

When software is built by contractors, who might further subcontract some of the work, and when the developers incorporate commercial and open-source products into the system, more problems arise. We don’t know the provenance of each component (see [2]), don’t have adequate oversight of external development shops, and cannot test all components in all expected operational contexts.

Detailing the Risks

Logical and physical computer-related risks have evolved dramatically in the half-century since computers were introduced into common government, business and academic use. The risks have proliferated and become much more complex in the 25 years since the advent of the internet. Relatively speaking, omnipresent cybersecurity risks are a new phenomenon and have only really taken hold in the past decade or so.

In Table 1, we show the progression of pre-internet technologies and how logical and physical risks changed from one computer era to the next. The model of the highly centralized, well-guarded data center overseen by specially trained technicians gave way to more distributed computing, frequently operated by staff with less training. Then, when the personal computer was introduced, non-technical individuals controlled the local systems.

Computer networks were initially limited in scope and geography and were supported by relatively few specialists. But as the use of minicomputers and personal computers expanded, so did the interconnections within physical locations and among them. However, they were still clearly specified and known to technical support engineers. Under these circumstances, it was still possible to identify counterparties, negotiate agreements with them, and invoke contractual terms when necessary.

Then, as shown in Table 2, along came the internet. No one knew much about the source of software products being used, especially those embedded within services, and the specific network topologies applicable to particular users and organizations.

As a result, it has become much more difficult to identify responsible counterparties and hold them accountable through the previously effective contractual methods. This has exposed the need to revisit contractual relationships and come up with ways to negotiate and enforce the terms and conditions necessary to protect individuals and organizations.

Software Contracts and Cybersecurity

We now examine typical provisions included in software contracts by software category. It is important to realize that software products (or pre-packaged software) are typically licensed rather than purchased, so the vendor retains ownership. The organization paying for the development usually owns custom-built software, whether internal staff or third parties develop it. Open-source software belongs to everybody or nobody; yet even though it is freely available, those who use it still have contractual obligations. Embedded software can fall into any of the above categories, but the system or service provider generally handles conditions of use and support. Software-based services may be retained in-house, managed by a third party or situated in the cloud.

Figure 2 illustrates how cybersecurity has been increasingly included in software contracts, for both products and services, over the past several decades and well into the future. The diagram intersperses the various technologies that have emerged over this same period. Considering that the internet did not become popular until the 1990s, the lack of awareness about cybersecurity for many years is quite understandable.

Until about 15 years ago, information security was barely mentioned in software products and software-intensive service acquisition agreements. In the last decades of the 20th century, software agreements focused on the items shown in Table 3, per [7] and [8].

To the extent that it was addressed at all, security was mostly directed at confidentiality by safeguarding the vendor’s intellectual property, such as source code. Buyers had little protection other than (perhaps) having the vendor place the source code in escrow so they could access it if the vendor went out of business or no longer supported the products or services.

While not ideal, working with source code is better than trying to reverse-engineer object code (or machine language). Other platform issues (operating systems, system utilities, firmware and hardware) and data issues (media, devices, formats) were generally not considered. Yet these issues could, and did, present situations that were difficult to resolve.¹³

In 21st-century contracts, inserting sections about security has become the norm. In fact, security has become one of the major factors considered, as evidenced by the explosion of suggested security requirements from researchers, government agencies, industry groups and the like.

Often, when a software manufacturer upgrades its products, it informs customers that it will no longer support the obsolete software after a given date. However, in some cases, the manufacturer will agree to continue fixing any security bugs found for a longer period and may charge for doing so.¹⁴

Cybersecurity requirements generally fall into the following categories:

  • Security triad (confidentiality, integrity, availability of data and systems).
  • Privacy.
  • Functionality.
  • Errors and weaknesses.
  • Liability or lack thereof.

Many publications on cybersecurity, supply chain assurance and vendor management treat contracts almost dismissively: vendor contracts and service agreements often receive only a perfunctory mention as mechanisms to mitigate known security risks.

However, in the ISACA publication “Vendor Management Using COBIT® 5” [7], there is an entire section (chapter 5) on binding documents, which include:

  • Requirements overview.
  • Call for tender (RFP).
  • Vendor contract.
  • SLA (service level agreement).
  • OLA (operating level agreement) and underpinning contracts.
  • Vendor performance reports.

Nevertheless, while the vendor contract includes sections on information confidentiality and intellectual property, there is little reference to cybersecurity as such.

In chapter 6 of the ISACA report [7] “Managing a Cloud Service Provider,” there is significant focus on security and data privacy, with a section on “Security Risk and Threats Related to Operating in the Cloud.” Here we find a discussion on whether the provider or the client assumes data and application security risk. There is a very helpful list under “Risk Factors by Service Model.” Here are some examples of cybersecurity risks:

  • Inadequate patch management.
  • Lack of visibility of technical security measures in place.
  • Insufficient virtual machine security maintenance.
  • Inadequate provider authenticity.
  • Vulnerabilities relating to service architecture.
  • Lack of visibility into software systems development life cycle (SDLC).
  • Inadequate identity and access management.

The “Overview of Threats and Mitigation Actions” section provides some slightly modified responses to these risks as they relate to contractual requirements:

  • Vulnerable access management. “A contractual agreement is necessary to officially determine who is allowed to access the enterprise’s information, naming specific roles for [providers’] employees and external partners.”
  • Unauthorized access. As above plus “All controls protecting the enterprise’s information assets must be clearly documented in the contract agreement or SLA.”
  • Virtual machine security maintenance. “Request [provider’s] internal SLA for … vulnerability management, patch management and release management when new … vulnerabilities are discovered. The SLA must contain detailed specifications about vulnerability classification and actions taken according to the severity level.”
  • Application attacks. As above plus: “… which must align with corporate policies and procedures for similar events.”
  • Collateral damage. “… include the [customer] in [provider’s] incident management process that deals with notification of collateral events … Include contract clauses and controls to ensure that the [customer’s] contracted capacity is always available and cannot be directed to other [customers] without [written] approval.”
  • Asset ownership. “Include terms in the contract with [provider] that ensure that the [customer] remains the sole legal owner of any asset migrated to the [provider, or any asset developed and paid for by the customer].”
  • Asset disposal. “Include terms in the contract that require, upon contract expiration or any event ending the contract, a mandatory data wipe carried out under the [customer’s oversight].”
  • Asset location. “Include terms in the service contract to restrict the moving of [customer information] assets to only those areas known to be compliant with the [customer’s] own regulation.”
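The SLA language quoted above on vulnerability classification implies a mapping from severity level to a contractual remediation deadline. A minimal sketch of such a clause in code follows; the severity tiers and day counts are hypothetical assumptions, since a real SLA would define its own classification and windows:

```python
from datetime import date, timedelta

# Hypothetical severity tiers and remediation windows (days). A real
# contract would spell out its own classification scheme and deadlines.
PATCH_WINDOW_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def remediation_deadline(disclosed: date, severity: str) -> date:
    """Date by which the provider must patch, per the (hypothetical) SLA."""
    return disclosed + timedelta(days=PATCH_WINDOW_DAYS[severity])

def in_breach(disclosed: date, severity: str, today: date) -> bool:
    """True if the vulnerability is still open past its SLA deadline."""
    return today > remediation_deadline(disclosed, severity)

print(remediation_deadline(date(2016, 6, 1), "critical"))  # 2016-06-08
```

Encoding the deadlines this explicitly makes the breach condition mechanical to check, which is exactly the kind of specificity the contract clauses above call for.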

There are many additional areas where formal agreement is required. Here is a more comprehensive list of requirements that should be included in software and/or a service agreement:

Risk assessments: Conduct risk assessments prior to and periodically throughout the term of an agreement. If unacceptable risks are found, the explicit remediation and mitigation processes that should have been detailed in the agreement need to be activated, and their successful completion should be certified in writing.

Confidentiality: Ensure that certain categories of data — customer, client and employee personal information, intellectual property and business plans — are protected to the extent necessary to minimize the risk of unauthorized access, manipulation, destruction, etc.

Integrity: Ensure that data are not corrupted, destroyed or lost through unauthorized access, intentional or accidental misuse, software errors or hardware failures.

Availability: Ensure that authorized users and designated others (e.g., lawyers investigating evidence) have access to data and processing facilities whenever needed, subject to regular maintenance and other valid reasons for downtime.
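Stating acceptable availability levels explicitly means translating a percentage into a downtime budget the parties can measure against. A small illustration (the availability targets shown are hypothetical, not taken from any particular SLA):

```python
# Convert an availability target into an allowed-downtime budget.
# The targets and the 30-day measurement window are illustrative.

def downtime_budget_minutes(availability_pct: float, days: int = 30) -> float:
    """Minutes of allowed downtime over `days` at the given availability."""
    total_minutes = days * 24 * 60
    return round(total_minutes * (1 - availability_pct / 100), 1)

print(downtime_budget_minutes(99.9))   # 43.2 minutes per 30-day month
print(downtime_budget_minutes(99.99))  # 4.3 minutes per 30-day month
```

The difference between "three nines" and "four nines" is a factor of 10 in allowed downtime, which is why the contract should state the number, the measurement window, and the exclusions (e.g., scheduled maintenance) explicitly.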

Privacy: Meet legal and regulatory requirements for protecting and not disclosing personal data (especially financial, health and lifestyle data), whether accidentally, intentionally or through unauthorized access and use.

Functional capabilities: Ensure the system only performs allowable functions and provides authorized access to specific data. It does NOT perform any other functions, especially those that might lead to unauthorized and nefarious activities by insiders or outsiders.

Nonfunctional capabilities: Ensure that non-functional features — security, scalability, reliability, maintainability and recoverability — have been considered and that terms are included in the service-level agreement (SLA). Acceptable levels of each aspect must be stated explicitly, and remedies when the standards are not met must be spelled out.

Quality assurance: Ensure that the product or service fully meets predetermined functional and nonfunctional criteria and provide written evidence that it has done so.

User acceptance: Make sure the user community generally finds the product or service acceptable from an operational perspective. This might involve testing a prototype or final product for a customized product or service or providing a free or discounted trial period for an off-the-shelf product or service.

Performance: Determine that the product or service can meet pre-specified performance criteria, such as transaction throughput and response time, under various load situations (number of users and transactions processed per second).
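A performance criterion only becomes enforceable once it is stated as a measurable test. The sketch below checks measured response times against a hypothetical contractual criterion ("95th percentile response time under 500 ms at the tested load"); the threshold, percentile and sample data are all illustrative:

```python
# Check measured response times against a hypothetical SLA criterion:
# "95th percentile response time under 500 ms at the tested load."

def percentile(samples, pct):
    """Nearest-rank percentile of a list of measurements."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_sla(response_ms, pct=95, limit_ms=500):
    """True if the pct-th percentile response time is within the limit."""
    return percentile(response_ms, pct) <= limit_ms

measured = [120, 180, 210, 250, 300, 320, 410, 450, 480, 700]
print(meets_sla(measured))  # the p95 sample here is 700 ms, over the limit
```

Percentile criteria are generally preferable to averages in an SLA, because a small number of very slow transactions can hide inside a healthy-looking mean.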

Context: Frame other functional and nonfunctional requirements, such as privacy, availability, performance and resiliency, in terms of the product or service’s intended operating context. Ensure any modifications for the particular context that may be required have been specified and included as part of the overall agreement.

Personnel: Specify requirements for all personnel involved in implementing and operating the product or service in terms of number, location, skills and expertise. This includes both the supplier and customer organizations. Ensure appropriate arrangements have been made for training before the software is implemented.

Security errors, weaknesses and vulnerabilities: Verify that any software, either as a product or integrated into a service, does not contain common security defects that have been listed by such entities as OWASP, SANS and MITRE. Provide written proof that any such defects have been addressed and resolved.
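One way to verify this clause in practice is to cross-check the delivered software's component inventory against lists of known-vulnerable versions. The sketch below is purely illustrative: the advisory table and component names are hypothetical placeholders, whereas a real process would draw on sources such as the OWASP, SANS and MITRE lists:

```python
# Sketch of a contract-verification step: cross-check a bill of materials
# against known-vulnerable component versions. ADVISORIES is a hypothetical
# stand-in for data from sources such as the OWASP, SANS and MITRE lists.

ADVISORIES = {
    ("examplelib", "1.2.0"): "CWE-89 (SQL injection)",
    ("cryptoutil", "0.9.1"): "CWE-327 (broken crypto algorithm)",
}

def audit(bill_of_materials):
    """Return (component, version, weakness) for every flagged entry."""
    return [
        (name, ver, ADVISORIES[(name, ver)])
        for name, ver in bill_of_materials
        if (name, ver) in ADVISORIES
    ]

sbom = [("examplelib", "1.2.0"), ("parserkit", "2.0.3")]
for name, ver, weakness in audit(sbom):
    print(f"{name} {ver}: {weakness}")
```

An empty audit result does not prove the software is free of defects, of course; it only shows no match against the advisories consulted, which is why the clause also demands written proof of resolution from the supplier.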

Liability: Specify in advance, as part of the agreement, the responsibilities of each party if the product or service fails to meet the above criteria. State the remedies and penalties (if any), particularly regarding direct and consequential damages.

For more information about these sections and additional areas, refer to the references and suggested reading sections at the end of this article.

Optimizing the Agreement from All Sides

Those who buy and sell software products and software-based services have their own motivations and preferences. Sellers want to minimize their liability and are reluctant to commit to damages beyond the cost of the product or service. They want to maximize long-term net income and shareholder value. If that means doing as little as possible to satisfy the terms of license and service agreements, then so be it — unless they can gain additional revenue to pay for any extra service or liability, of course.

Buyers, on the other hand, want sellers to take on as much liability as possible and fix any issues under a no-charge or minimal-cost warranty.

Clearly the goals of each party are in conflict. In such circumstances, the objective is arriving at an optimal point of agreement. This is often one where there is aggregated “least regret” on the part of all parties involved. Parties involved in the optimization process include not only the direct buyer and seller, but parent organizations, subcontractors and secondary suppliers.

It is becoming much more complex even to identify all the parties. Software design, development and testing are frequently outsourced, often offshore. Additionally, almost all software products and services contain open-source programs, to which a very different set of rules applies.

Attempting to bring all these considerations together in a set of documents can be a daunting process, and the chances of omitting important areas are high. Nevertheless, a software contract or service agreement that contemplates these factors will avoid many potential issues.

Negotiating the Contract

There are several factors influencing the outcome of a negotiation. Perhaps the most significant is the relative negotiating power of the parties, which is a function of relative size, competition and market share. When the difference in size is very large (e.g., an individual or a single small company negotiating with a very large software or IT services supplier), the opportunity to achieve desired terms and conditions is minimal, if it exists at all.

Only when individuals and smaller organizations (and sometimes large organizations) band together can they expect to make some headway in negotiations. Otherwise, it is a “take-it-or-leave-it” situation, where the smaller entity, be it an individual or an organization, is forced to accept the terms offered or go elsewhere — if there are alternative products and services.

Enforcing the Contract

No matter what is written in the contract, either party can choose whether to pursue a particular infringement. That decision involves not only the operational and financial impact of the infringement but also the relationship between buyer and seller and each party’s appetite for litigation.

In addition, all parties enter into an agreement with preconceived ideas about the likelihood that there will be a breach of contract — and the likelihood that the aggrieved party will take action. It often takes more than a single transgression to trigger legal action or arbitration. However, it is important for the parties to understand that arguments about enforcement may well arise.

The more specific the agreement about what constitutes a failure to perform (by any party), the better. It is also important to identify remedies that might satisfy the performance requirements before legal action is initiated. The preferred form of resolution, whether legal action or arbitration, should be specified in the contract ahead of time.


The nature of software-related risks has changed radically over the past half-century as computer system and network technologies and their use have evolved from isolated mainframes to massively interconnected mobile devices. During the early days of computing, the number of suppliers was usually limited to just a few (e.g., IBM and AT&T).

While these large companies were difficult to negotiate contractual terms and conditions with, at least you knew who to go to if you had a problem. Today, there are so many intertwined entities involved in even the simplest of products or services that it has become a major task just to identify the players, much less negotiate contracts with them.

In this article, we have identified computer-related risks and documented their evolution as they have morphed into software cybersecurity risks. We have listed some, but not all, of the requirements and verification processes that should be specified in software contracts and service agreements. This will help you reach a point where a reasonable and necessary level of cybersecurity can be achieved.

Disclaimer: The author is not an attorney and does not intend to provide legal advice in this article. Readers should consult with their legal departments or outside lawyers to discuss which terms and conditions should be presented for negotiation with other parties and to develop appropriate contractual wording of such terms and conditions.

Figures and Tables:

Figure 1. The process of developing and enforcing software cybersecurity clauses in contracts.

Table 1. Logical and Physical Risks by Technology Category in the Pre-Internet Era.

Table 2. Logical and Physical Risks by Technology Category in the Post-Internet Era.

Figure 2. Increasing Inclusion of Security in Software and Service Contracts.

Figure 3. Increasing Proportion of Security and Privacy Sections in Contracts.

Table 3. Items Included in Contracts in the Late 20th Century.

References and Notes

  1. The author recalls following up with the claimed customers only to discover that some of them did not even do business with the vendor. Therefore, it is an important part of due diligence to check on even the most casual of claims and to make sure that the people with whom you are dealing are who they say they are.
  2. Researchers frequently take the customers’ viewpoint and fail to consider the needs of vendors adequately. In Axelrod [2], the author does evaluate both sides’ requirements and considers how certain factors, such as the relative sizes and the industry standing of each party, affect the eventual outcome.
  3. Note that customers do not actually purchase and own software products; they license them. This means that customers are not allowed to resell software even if it was originally bundled into equipment that they are selling. Sometimes the software license may be transferred to the new equipment owner, but in other cases, the new owner must license software directly from a vendor. Whereas open-source software is free, it is still subject to user agreements, which the new licensor must execute.
  4. Even very large companies have been able to make only minor changes to terms and conditions of dominant vendors’ contracts. And in such cases, they might only do so under the threat of withholding payments or cancelling the acquisition, if the latter is indeed feasible.
  5. If you deal with custom-built software or customized off-the-shelf software, you might find yourself hiring developers or engaging consulting firms with specific knowledge of your particular software. For purely off-the-shelf software, such expertise might not be available at any price. Having to persuade programmers to join your organization under such circumstances can be a particularly grueling experience.
  6. An impartial review is often linked to who paid for the review and the reputation of the reviewing and reviewed entities. The best case is when the reviewer is funded by customers, as with the Consumers Union, and purchases software products anonymously from a retailer. While reviews vendors paid for may be objective and credible, it is natural for some bias to be introduced. Potential customers need to consider these issues.
  7. The cybersecurity aspect of vehicle systems is discussed extensively in a GAO publication [3].
  8. In the early days of computing, companies would worry that others might be able to get access to their customer account lists because those lists could be used by other organizations to steal customers. Organizations were generally not motivated by privacy issues to protect data.
  9. Sometimes, poorly implemented security features further encourage software developers to bypass security. A recent example is the chip-enabled credit card, which appears to have a slower verification and processing system than the magnetic-stripe swipe card it replaces. Some claim that this is more perception than reality.
  10. DevOps is a relatively new term that is defined as “the blending of an enterprise’s applications development and systems operations teams.” DevSecOps further extends this collaborative model to include information security.
  11. As of May 2016, Facebook had 1.65 billion active users worldwide, and the average time spent on Facebook per day was 50 minutes, according to [6].
  12. For details, see
  13. As an example, tax-collecting agencies require information to be retained for specific lengths of time, which may exceed the useful lives of the computer systems on which the data are stored and processed. When a computer system is replaced, such data must be converted to the format of its replacement system. If that is not possible, the former configuration must be recreated so the required data can be extracted.
  14. A situation such as this arose when Microsoft notified customers that Windows NT 4.0 would no longer be supported after a December 31, 2004, deadline. As John Foley [8] wrote on December 22, 2004: “The clock is winding down on Windows NT, the operating system that propelled Microsoft into the computer server market 11 years ago. The last day Microsoft will provide general support for Windows NT Server 4.0 is Dec. 31, after which companies need custom contracts if they want the safety of Microsoft security fixes and other help.”

C. Warren Axelrod, Ph.D.


C. Warren Axelrod, Ph.D., is a senior consultant with Delta Risk, a consultancy specializing in cyber defense, resiliency, and risk management. Previously, Axelrod was the chief privacy officer and business information security officer for US Trust.
He was a co-founder of the FS-ISAC (Financial Services Information Sharing and Analysis Center). He represented the financial services sector at the national command center over the Y2K weekend and testified before Congress about cyber security in 2001. He has participated in a number of initiatives at sector and national levels.
Dr. Axelrod was honored with the prestigious ISE (Information Security Executive) Luminary Leadership Award in 2007 and, in 2003, he received the Computerworld Premier 100 IT Leaders Award and Best in Class Award. His article “Accounting for Value and Uncertainty in Security Metrics” won ISACA’s Michael P. Cangemi Best Book/Best Article Award in 2009.
Dr. Axelrod has published five books on various IT risk, outsourcing, cyber security, privacy and safety topics. His most recent book is Engineering Safe and Secure Software Systems, released in 2012 by Artech House. He has published three prior articles in CrossTalk magazine.
He holds a Ph.D. (managerial economics) from Cornell University and MA (economics and statistics) and B.Sc. (electrical engineering) honors degrees from the University of Glasgow. He is certified as a CISSP and CISM.

Phone 917-670-1720

