Friday, June 25, 2010

SaaS Network Security: Securing Information in the Cloud

 

Abstract:

The purpose of this paper is to illustrate infrastructure security at the network level of the cloud. The paper covers rudimentary network architecture and design. It also addresses known threats and vulnerabilities along with possible remediation efforts for risk reduction. The focus is on the use of third-party datacenters and the network security between those facilities and the customer's systems.

Introduction

To start out, the good news is that the data involved in cloud computing is only vulnerable in three places. The bad news is that those places are the source, the destination and the network that connects the two. Cloud computing, or Software as a Service (SaaS), is becoming the latest buzzword in corporations and small businesses alike in today's economy. In an effort to reduce the total cost of ownership of information technology as a whole, businesses are moving away from purchasing and maintaining their own internal infrastructure of servers and system administrators. Instead, many businesses are maneuvering to reduce the expense of maintaining multiple systems by either creating an internal cloud using virtualization technology or by utilizing third-party datacenters that specialize in hosting customers' systems in shared spaces. “Many companies are creating cloud services as a “pay as you go” scenario. These include sharing networks, computers, storage and even software applications.” (Szczygiel, 2010, March) The third-party solution provider of the data center can offer many economic advantages for the corporate consumer, such as data assurance, redundant systems, and hot/hot disaster recovery sites distributed around the globe. Internet technology has evolved to the point where the information can be accessed from anywhere in the world, from a multitude of devices, formatted in an assortment of ways.

When adopting a network security strategy for your cloud computing plan, there are several items that must be considered and discussed with the third-party service provider. The areas that must be addressed are the confidentiality and integrity of information flowing between the business's systems and the datacenter, access control, availability, and redefining the network to accommodate the cloud. These need to be defined explicitly before any implementation takes place.

Network Confidentiality and Integrity

When network confidentiality and integrity are mentioned for cloud security, PKI is uttered and the matter is then left to the implementation team to roll out a certificate-based solution. Unfortunately, this is not the only aspect of network confidentiality and integrity that needs addressing. Data sources are being exposed via the Internet in the cloud solution. “Confidentiality issues center on where the data is being stored, how and to where it's moved, and where it might be moving in the future, says Roland Trope, a partner at Trope and Schramm in New York City who is writing a book on cloud computing.” (Acello, 2010)

With the advent of Web 2.0 technologies such as XML, SOAP and AJAX, and the multitude of cloud application platforms from a multitude of vendors, vulnerabilities are often exposed even when the traffic is sent over a network connection. In order to mitigate these risks, a Transport Layer Security (TLS) connection, the successor to the SSL protocol, should be used for web browser traffic. Internet Protocol Security (IPSec) should be used for communication among the systems containing applications and data used by the corporate consumer. IPSec should be utilized in situations where information is transferred across any network: between data centers, to and from the customer, and even within the data center. Keep in mind that the data center is using a shared network, and the corporate consumer has little control over applications run on other systems it does not control. If another system within the data center is compromised through a different and less security-conscious corporate consumer, then network integrity could be compromised. IPSec configured with policies to accept traffic based on source and destination addresses or networks will help mitigate the risk of a compromised system sniffing the network for traffic.
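To make the transport-layer piece concrete, here is a minimal Python sketch, using only the standard library's ssl and socket modules, of a client that opens a TLS connection and refuses to proceed unless the server's certificate validates against the system's trusted certificate authorities. The portal host name is an assumption used purely for illustration, not an endpoint from this paper.

import socket
import ssl

# Hypothetical portal host, used for illustration only.
PORTAL_HOST = "portal.example.com"
PORTAL_PORT = 443

# Client-side TLS context that verifies the server certificate and host name
# against the operating system's trusted certificate authorities.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/early TLS

with socket.create_connection((PORTAL_HOST, PORTAL_PORT), timeout=10) as raw_sock:
    # wrap_socket performs the TLS handshake and certificate/host name checks;
    # it raises ssl.SSLCertVerificationError if validation fails.
    with context.wrap_socket(raw_sock, server_hostname=PORTAL_HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        print("Server certificate subject:", tls_sock.getpeercert()["subject"])

The same idea applies to the IPSec policies described above: traffic is accepted only from endpoints that can prove their identity, rather than from anything that happens to share the datacenter network.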

Host-based firewalls will also mitigate the risk of attack by reducing the exposed surface area, reducing the number of open ports. Use of an SSL VPN portal will also increase security, since it can tunnel all web-based and application traffic through a single encrypted connection.

Access Control

Access should be restricted to the required users on the requisite systems in order to conform to security policy requirements. Access control is extremely important when using third-party datacenters because of the inability to audit the network traffic. Monitoring is often limited and delegated to the cloud hosting service.

An identity and access management (IAM) solution should be integrated into the early stages of cloud migration planning to coordinate the control of passwords and rights for both local systems and the cloud infrastructure. “Organizations also are turning to IAM to meet compliance and regulatory requirements that are putting a greater burden on the security administration function in the form of additional reports, better records of workflow and change requests, and periodic self-assessments.” (Rai & Chukwuma, 2009)

Continuing with the advantages of an SSL VPN portal solution, the use of a system to control access based on identity and credentials is necessary to comply with several legal mandates. VPN portals can also provide auditing based on several points of data, including but not limited to time, date, user identity, source IP, and the data or application accessed. Some VPN portal solutions even offer endpoint detection that will determine whether the system accessing the information has the required security software updates and anti-virus installed, and can limit access until the connection requirements have been remediated.
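As a purely illustrative sketch (the field names and the check_access function below are assumptions for this paper, not any vendor's API), the points of audit data listed above could be captured as one record per access decision:

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AccessAuditRecord:
    # The data points mentioned above: time/date, user identity,
    # source IP, and the data or application accessed.
    timestamp: str
    user: str
    source_ip: str
    resource: str
    endpoint_compliant: bool
    decision: str

def check_access(user: str, source_ip: str, resource: str,
                 endpoint_compliant: bool) -> AccessAuditRecord:
    # Illustrative policy: deny when the endpoint fails its health check.
    decision = "allow" if endpoint_compliant else "deny"
    return AccessAuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user=user,
        source_ip=source_ip,
        resource=resource,
        endpoint_compliant=endpoint_compliant,
        decision=decision,
    )

record = check_access("jdoe", "203.0.113.17", "erp-web", endpoint_compliant=False)
print(json.dumps(asdict(record)))  # one audit line per access decision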

Another concern is who will really have access to the data. There must be a high level of trust in the cloud service provider and its employees. A related concern is where the data is stored with regard to the physical location of the servers and the replication servers. What are the legal restrictions if the information is stored in a jurisdiction other than the one where the business consumer is located? Consider a company that has a government contract dealing with sensitive information. If the data is stored in a foreign data center, it is possible that the data could be copied under the laws of that government and used for espionage purposes, be it national, industrial or corporate spying. Also, if the data is kept domestic but stored in another jurisdiction, there could be legal implications in the event of a subpoena or lawsuit to access that information.

Availability

If you don't have the information available to you when you need it, then it is useless. Most cloud service providers have distributed datacenters that are considered hot sites for disaster recovery. The most vulnerable link in the chain is the business consumer's access to the Internet. Even though the data and servers have moved to the cloud, the company is still responsible for maintenance of its local resources such as workstations, mobile communication devices and the local network. Since the business will rely on the Internet connection for access to the information, redundant gateways must be instituted in the network design.

Business continuity planning must be incorporated into the security plan at this time. Although the prospects are remote, what are the plans to retrieve data if the cloud service provider goes out of business or if a dispute occurs between the business consumer and the provider? Are there plans if the cloud provider is compromised by an attack? If the data system goes down, how long would it take to restore it? These are just some of the questions that must be addressed in the plan.

Redundant data centers, multiple portals and hot sites should be required in any contract with a cloud service provider. Service level agreements must be negotiated that guarantee uptime and determine the repercussions if those guarantees are not met. “Users of individual SaaS products have generally become confident that their vendor is proficient in maintaining security, ensuring that data is backed up and carrying out other support tasks. However, venturing more broadly into “the cloud,” where many applications may be used as services, is a different matter; establishing trust with numerous third-party suppliers is a complex process.” (Lamont, 2010)

“Fortunately, the SaaS model provides numerous answers for these types of challenges. Multi-tenant SaaS services are normally hosted in highly reliable data centers with built-in redundancy.

The best providers also employ separate disaster recovery centers to restore full operations if the primary center is disabled. Redundancy in the communication path is built into this model due to the Internet’s capability to send information via numerous routes.” (Szczygiel, 2010, March)

Network Architecture & Redesign

The business consumer must collaborate with the cloud service provider on the architecture of its portion of the cloud. Virtual and physical firewalls, portals and proxies that meet compliance and redundancy requirements must be put in place. Domain Name System (DNS) management must be set up and established in a secure fashion.

Network zones and tiers are no longer necessary in the local environment. Internal infrastructure is now minimal, with possibly a proxy server, print servers and Internet routers. In the cloud, the segmentation is virtualized, done by security groups, security domains and virtual data centers. In most instances, the systems are virtualized as well. “Essentially, virtualization enables one to encapsulate the processing capabilities of a computing resource into a virtual machine and execute the virtual machine in an isolated environment on a host computer. This enables one to run one or more virtual machines on the same host computer, run a virtual machine on a host computer with a different operating system, run a virtual machine in a “sandbox” where the virtual machine’s action cannot modify the host computer, to name just a few applications.” (Lunsford, 2009)

Administrative control is no longer in the hands of the local system administrators. Although the business consumer will likely keep a few IT professionals employed, the reliance is now upon the administrators of the cloud service provider to perform maintenance. Local administrators will need to maintain the local workstations, network appliances and the remaining servers. These IT staff members must coordinate with the cloud service provider's administrative staff for maintenance of the virtual machines. Cloud service providers usually maintain the host systems where the virtual systems reside. It must be clear who maintains security patches on those systems, manages DNS entries and maintains security access to those systems.

Conclusion

Cloud security has multiple environmental vulnerabilities, such as insecure APIs in platforms, logical multi-tenancy where information from multiple businesses is co-mingled in a data center, data protection and leakage, lack of audit controls, and denial of service along with other network threats. Securing the network against these threats is a way of mitigating some of the risks involved when entrusting data that is vital to the business to a third party. Just as one would pick up a phone or turn on a radio or television, connecting to the data should be as seamless as possible for those authorized to access it through the device of their choice. The nebulous networked data centers distributed around the world are allowing multitudes of individuals and companies to collaborate on their immediate needs independent of where they are located, provided the user has access to the Internet. All is good…no reason to panic.

 

Works Cited

Acello, R. (2010, April). Get Your Head in the Cloud. ABA Journal, pp. 28-29.

Lamont, J. (2010, January). Building Trust in the Cloud. KM World, p. 13.

Lunsford, D. L. (2009). Virtualization Technologies in Information Systems Education. Journal of Information Systems Education, 339-348.

Rai, S., & Chukwuma, P. (2009, August). Security in a Cloud. Internal Auditor, pp. 21-23.

Szczygiel, J. (2010, March). The 5 C's of SaaS. Security Dealer & Integrator, 69-70.

Thursday, June 10, 2010

Securing a B2B Web Portal Using SSL VPN

Here is another paper for the University of Dallas

 

Abstract

Business to Business (B2B) web portals have become very common in current corporate environments, where companies deal with partners, clients, resellers, vendors, vendees, licensees, outsourcers and anyone else they do business with on an ongoing basis. These portals are available for connectivity through the Internet, but the access levels are often restricted by access controls based on authenticating the user account connecting to the site. As these portals have evolved, more than just web content is becoming available to users of B2B sites. Access controls are becoming more complex, and more customized data is created for specific uses based upon confidentiality and integrity requirements. Secure Sockets Layer Virtual Private Network (SSL VPN) solutions are the trend for the future of connectivity, allowing a variety of users to transfer data via a secure connection that is initiated through a web portal interface over an encrypted connection across the Internet.

This paper discusses implementing a B2B portal using an SSL VPN solution for the purpose of establishing a secure interface through which business data is exchanged. Concepts that are covered are network architecture, Public Key Infrastructure (PKI), access control lists (ACL), end-point detection and popular vendor implementations. Securing electronic commerce via an SSL VPN web portal to an extranet provides the flexibility to communicate via a variety of devices, from web-enabled phones, Personal Digital Assistants (PDA) and kiosk systems to netbooks, PCs and laptops, regardless of the operating system or browser used. This extensible solution is enabling further collaboration amongst businesses around the globe.

Introduction

Businesses, both large and small, have a need to share information. Companies have long used messengers to courier data between themselves and their partners and customers. With the advent of computing, electronic data interchange (EDI) technology quickly took shape. To show the value of using an SSL VPN solution to connect to a B2B portal, a historical perspective on extranets and VPNs is necessary. The SSL VPN will be established as the most cost-effective and extensible solution for serving the greatest variety of end-user systems that connect to the B2B portal.

Extranets

“Extranets have been around since the first rudimentary LAN-to-LAN networks which began connecting two different business entities to form WANs.” (Maier, 2000) An extranet is usually a perimeter network placed on the Internet so that data can be accessed by external business partners. The term extranet is very broad in nature. “Similar to most emerging technologies, "extranet" does not have a clear definition, either in the academic world or in practice. Yet many people try to give it a descriptive explanation.” (Ling & Yen, 2001) The B2B portal is the entry point for businesses to share data between entities. Portals have ranged from dial-up Bulletin Board Systems to more sophisticated firewall-protected perimeter networks, sometimes referred to as a Demilitarized Zone (DMZ). Extranets have been used for many types of collaboration: Electronic Data Interchange (EDI), Supply Chain Management (SCM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Software as a Service (SaaS) and Application Service Providers (ASP) are just some of the many examples.

Historically, the portal for an extranet has been a firewall. Access given to the extranet is an open pass-through for certain protocols to specified servers, which can even be limited to a certain IP address or network range. In this example, there is not a separate network segment for the extranet and the intranet. This is a simple solution that does not cost very much to implement in terms of configuration and hardware. Maintenance is very simple and cost effective too. The data is easily accessible from internal clients as well. The risk is that there is just a single firewall dividing the intranet/extranet from the Internet. Figure 1 below shows an example of this topology.

[Figure 1: Single-layer extranet topology, with a single firewall separating the combined intranet/extranet from the Internet]

A single-layer extranet solution, as shown in Figure 1, is not the most secure solution available. Security considerations have been the impetus for the concept of the perimeter network known as a DMZ. This design architecture has become widely adopted by many enterprises that have implemented a sophisticated Internet presence with web, B2B, B2C and other connected applications (e.g., e-mail). This topology restricts external user access to only the perimeter network. Content can be isolated to a server farm in the perimeter network, which leads to simplified sharing and maintenance across the intranet and extranet. An argument could be made that content is safer on the perimeter network due to the isolation of the data. A virus outbreak on the intranet client workstations is less likely to spread to the perimeter network, and an attack from the Internet is likely to be isolated to systems located only on the perimeter network. User accounts for external users can be segregated from internal user accounts in the corporate networks by using a separate domain (Windows Active Directory) or realm (Unix NIS). More advanced implementations utilize reverse proxy web publishing as opposed to port forwarding. There are some disadvantages to consider in this deployment compared to the simplified network. Additional hardware is needed for the additional network segments and firewalls. There is greater overhead for maintenance of user accounts and access rules. Figure 2 below illustrates how a multi-layer perimeter network might appear. This architecture can transform into more complex scenarios by adding more layers of firewalls and subnets ad infinitum.

[Figure 2: Multi-layer perimeter network (DMZ) topology]

VPN

VPN is the acronym for Virtual Private Network. In a nutshell, a VPN is when one protocol is encapsulated within another protocol's packets and sent across a public network to a network that is considered private, where the payload is unwrapped and then sent to its destination. The client establishes a virtual tunnel to the VPN server. A variation is the site-to-site VPN tunnel that connects two private networks between two VPN servers; this is in actuality two one-way client-server connections. Client authentication is usually done via the SLIP (Romkey, 1988) or PPP (Simpson, 1994) authentication methods. In May 1998, Cisco Systems introduced the L2F protocol with RFC 2341 (Kolar, Littlewood, & Valencia, 1998). This was one of the first commercially successful implementations of the VPN concept. In July 1999, Microsoft introduced the PPTP protocol with RFC 2637 (Zorn, Taarud, Little, Verthein, Pall, & Hamzeh, 1999). PPTP allowed any Windows 95 or later client to connect to a Windows NT 4 SP3 or later server running Routing and Remote Access. Cisco and Microsoft then combined efforts to engineer the L2TP protocol in RFC 2661 (Pall, Palter, Rubens, Townsley, Valencia, & Zorn, 1999). This collaboration combined encapsulation with encryption using Kerberos, certificates (PKI) or a pre-shared secret. On the open-source front, IPSec transport and tunnel modes were implemented based on RFC 2401 (Kent & Atkinson, 1998). Microsoft, Cisco and other networking device manufacturers quickly adopted the IPSec standards for interoperability purposes. The main disadvantages of these VPN solutions are that L2F, PPTP and L2TP require a piece of software on the client, and IPSec requires policy configurations that can be very complex. While IPSec VPN solutions are seemingly operating system agnostic, there tend to be several interoperability issues between systems. PPTP and L2TP require custom software installations on non-Windows systems, and L2F requires proprietary Cisco VPN client software. Due to these limitations, the quest was on for a VPN solution that could work from any device regardless of operating system.

SSL VPN

Secure Sockets Layer VPN (SSL VPN) is the solution for connecting to remote networks over trusted and encrypted connectivity without preinstalled client software, dial-up networking or IPSec policy configuration. SSL VPN is able to establish a connection using a standard web browser, such as Microsoft Internet Explorer or Mozilla Firefox. To establish the SSL VPN connection, a user simply visits a web page using the HTTPS protocol. The web browser then downloads an ActiveX or Java component to the system, which wraps the TCP/IP network traffic destined for the remote network. As the traffic traverses the public networks in the SSL VPN, the protocol being used is Transport Layer Security (TLS), first described in RFC 2246 (Allen & Dierks, 1999) and later updated in RFC 5246 (Dierks & Rescorla, 2008). The TLS protocol has been used to secure HTTP, FTP, SMTP and many other protocols used across the web.
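For the server side of such a portal, a rough sketch of the TLS listener is shown below, again using only Python's standard library; the certificate and key file paths are placeholders, and a real SSL VPN appliance does far more than answer a single request.

import socket
import ssl

CERT_FILE = "portal_cert.pem"   # placeholder path to the portal's certificate
KEY_FILE = "portal_key.pem"     # placeholder path to its private key

# Server-side TLS context: the portal proves its identity to the browser.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        print("Portal sketch listening for HTTPS connections on port 8443")
        conn, addr = tls_listener.accept()   # TLS handshake completes here
        with conn:
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK")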

The TLS protocol secures the information by encrypting the data payload using Public Key Infrastructure (PKI) methods. In brief, PKI is the use of certificates to cryptographically protect data, ensuring that the exchange between two parties is secure based upon a trusted third-party certificate authority. Party #1 sends its public key to party #2, party #2 encrypts the data using party #1's public key, and party #1 then decrypts the information using its own private key. That is a very brief description of public-key encryption; TLS also commonly relies on the Diffie-Hellman key agreement method (Rescorla, 1999), in which the two parties derive a shared secret over an insecure channel.
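To illustrate the arithmetic behind Diffie-Hellman key agreement, the following Python sketch uses deliberately tiny, insecure parameters; real deployments use very large primes (or elliptic curves) and vetted cryptographic libraries rather than hand-rolled code.

# Toy Diffie-Hellman key agreement with insecure, illustrative parameters.
p = 23   # public prime modulus (far too small for real use)
g = 5    # public generator

a = 6    # party #1's private value (kept secret)
b = 15   # party #2's private value (kept secret)

A = pow(g, a, p)   # party #1's public value, sent to party #2
B = pow(g, b, p)   # party #2's public value, sent to party #1

shared_1 = pow(B, a, p)   # party #1 computes the shared secret
shared_2 = pow(A, b, p)   # party #2 computes the same shared secret

assert shared_1 == shared_2
print("Shared secret:", shared_1)   # both sides arrive at the same value

Neither private value ever crosses the network, yet both parties end up with the same secret, which can then be used to key the symmetric encryption of the session.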

SSL VPN solutions can offer additional security features such as the use of the Extensible Authentication Protocol (EAP) (Aboba, Simon, & Eronen, 2008). EAP usage normally includes the use of a smart card. With this two-factor authentication, authorizing access to the SSL VPN requires that the user be identified by something the user has in possession and something the user knows. Another security feature is that the Java or ActiveX client software has the option to perform endpoint detection. Endpoint detection can check the client system for minimum security requirements, such as the presence of anti-virus software, virus signature updates and security patch levels, along with many other configuration settings. Using the endpoint detection functionality helps boost confidence levels for network administrators when allowing remote systems to connect to the internal network. Another advantage is that SSL VPN traffic is able to pass through most firewalls and proxy servers without any additional configuration. Any device that has a web browser application is capable of connecting to the SSL VPN portal. This includes PDAs, smart phones, the Apple iPhone, and laptop and desktop systems, without regard to the underlying operating system.

There are several commercial vendor implementations of SSL VPN. Juniper, SonicWall, F5, Citrix, Cisco and Microsoft are the leaders in this area. The Juniper Networks SA is the current SSL VPN market leader, but the Cisco ASA, SonicWall Aventail, F5 FirePass, Citrix Access Gateway and Microsoft IAG/UAG also have substantial market share. Each system has its own strengths and weaknesses.

Usage of SSL VPN Solutions as a B2B portal

Setting up the B2B portal does require complex planning and documentation. Once the decision has been made on which vendor best suits the needs of the implementation, the next step is to determine the "who, what and when". The "who" refers to the entities that require access. Once it has been determined who is going to be given access to the internal network through the SSL VPN portal, accounts with passwords must be created and securely delivered to the intended recipients. The "what" is the determination of the items the client will have access to and the protocols to be used. Another consideration is the level of privilege of the access to the intended resources on the extranet. The "when" covers the times at which access will be allowed. Once this basic information has been collected, policies should start to be formed, as in the sketch below.
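As a purely illustrative aid (the structure and names here are assumptions, not any vendor's configuration format), the "who, what and when" gathered above could be captured as data and checked before a session is published:

from datetime import datetime, timezone

# Hypothetical access policy capturing the "who, what and when".
POLICY = {
    "supplier-acme": {                                   # the "who"
        "resources": ["erp-web", "inventory-levels"],    # the "what"
        "allowed_hours": range(6, 20),                   # the "when" (06:00-19:59 UTC)
    },
}

def is_access_allowed(account: str, resource: str, when: datetime) -> bool:
    entry = POLICY.get(account)
    if entry is None:
        return False
    return resource in entry["resources"] and when.hour in entry["allowed_hours"]

now = datetime.now(timezone.utc)
print(is_access_allowed("supplier-acme", "erp-web", now))     # published to this profile
print(is_access_allowed("supplier-acme", "crm-portal", now))  # not published, so denied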

In Figure 3, we see an example of a Microsoft IAG SSL VPN portal.

[Figure 3: Microsoft IAG SSL VPN portal home page]

In this screenshot, the user has the ability to launch an application, such as an EDI program, go to extranet web sites, and even explore files on a predetermined server, all dictated by the user's profile attributes. The client system has not passed endpoint detection, but the policy in this instance does not require it, and a link is provided to send the system to remediation.

B2B portals can be set up for partner access, vendors, distributors, suppliers, logistics services and even for privileged client access. Using a single SSL VPN portal for all of these entities is entirely possible because of the attributes assigned to each user. For instance, a supplier could log into the SSL VPN portal and the only systems published to his profile might be the ERP system and a web site showing product storage capacity levels. The supplier would not have access to the CRM, distribution lists or any other system that would constitute data leakage. This is a distinct advantage over other VPN solutions where the user would have unencumbered access to the network and each individual system would have to be responsible for allowing or denying access to its data. The problem with those solutions is that unscrupulous individuals had the ability to prowl the network to discover the systems exposed on the extranet or, in other situations, the internal corporate network.

Conclusion

The SSL VPN solution is becoming the best available in a world filled with disparate systems, requiring only a web browser to facilitate the connection. Client access could be from behind a proxy or firewall, from a kiosk, or from any other Internet-connected device. A partner could collaborate on strategic projects on a Project Server while a distributor uploads orders via a CRM solution using the same SSL VPN portal, with their sessions and traffic segregated. Data sent across the public network is encrypted with the TLS protocol. The portal can be deployed in both simple and complex extranet topologies, as well as allowing controlled access to resources on the internal network.

References

Aboba, B., Simon, D., & Eronen, P. (2008, August). RFC 5247 Extensible Authentication Protocol (EAP) Key Management Framework. Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc5247.txt

Allen, C., & Dierks, T. (1999, January). RFC 2246 The TLS Protocol. Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc2246.txt

Dierks, T., & Rescorla, E. (2008, August). RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2. Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc5246.txt

Kent, S., & Atkinson, R. (1998, November). RFC 2401 Security Architecture for the Internet Protocol. Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc2401.txt

Kolar, T., Littlewood, M., & Valencia, A. (1998, May). RFC 2341: Cisco L2F. Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc2341.txt

Ling, R. R., & Yen, D. C. (2001). Extranet: A New Wave of Internet. SAM Advanced Management Journal, 66(2), p39, 6p.

Maier, P. Q. (2000). Ensuring Extranet Security and Performance. Information Systems Management, 17(2), pp. 33-41.

Pall, G. S., Palter, B., Rubens, A., Townsley, W. M., Valencia, A. J., & Zorn, G. (1999, August). RFC 2661 L2TP. Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc2661.txt

Rescorla, E. (1999, June). RFC 2631 Diffie-Hellman Key Agreement Method. Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc2631.txt

Romkey, J. (1988, June). Request for Comments: 1055 A NONSTANDARD FOR TRANSMISSION OF IP DATAGRAMS OVER SERIAL LINES: SLIP. Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc1055.txt

Simpson, W. A. (1994, July). RFC 1661: The Point-to-Point Protocol (PPP). Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc1661.txt

Zorn, G., Taarud, J., Little, W. A., Verthein, W., Pall, G. S., & Hamzeh, K. (1999, July). RFC 2637 Point-to-Point Tunneling Protocol (PPTP). Retrieved October 29, 2009, from IETF: http://www.ietf.org/rfc/rfc2637.txt

DNS Security for E-Commerce

Here is a paper I wrote for the University of Dallas.

DNS Overview

DNS is the familiar term for the Domain Name System. When teaching DNS to students, I often use the following analogy: DNS is like a phone book. Every name in the phone book has a phone number associated with it. In DNS, every host and its associated domain name, together called a fully qualified domain name (FQDN), has a unique IP address associated with it. Prior to DNS, systems on a TCP/IP network used hosts files, which are basically text files listing every computer on the network and its associated IP address. When managing hosts files on every system became too cumbersome, the DNS daemon was developed for the UNIX-based systems that dominated computing networks. The concept was very simple: a client would make a recursive query to its DNS server using UDP port 53. Keep in mind that UDP stands for User Datagram Protocol, which is a connectionless protocol. If the DNS server had the domain and host (A) record configured on its system, it would reply to the DNS query with a DNS response containing the IP address for the FQDN. If the DNS server did not host the domain, it would then make iterative queries, first to the root DNS servers, then to the servers for the top-level domain (e.g. .com, .net, .eu) and finally to the server authoritative for the domain being requested. The DNS server, having done all of that on behalf of the client, would then return the DNS response to the client to complete the recursive query, storing a copy in its cache for a time-to-live (TTL) period. Using the UDP protocol made this a very fast transaction when the record was found, a distinct advantage over the TCP protocol because it avoids connection overhead.
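As a rough illustration of how small and connectionless this exchange is, the following Python sketch hand-builds a minimal DNS query for an A record and sends it over UDP port 53 to a public resolver (8.8.8.8 is used here only as a convenient example); it reads the answer count out of the response header rather than fully parsing the records.

import socket
import struct

def build_query(fqdn: str) -> bytes:
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, AN/NS/AR counts = 0.
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: each label is length-prefixed and the name ends with a zero byte,
    # followed by QTYPE=1 (A record) and QCLASS=1 (IN).
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in fqdn.split("."))
    return header + qname + b"\x00" + struct.pack("!HH", 1, 1)

query = build_query("www.example.com")
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(query, ("8.8.8.8", 53))     # a single UDP datagram, no handshake
    response, _ = sock.recvfrom(512)        # classic DNS responses fit in 512 bytes

# ANCOUNT is the third 16-bit field of the DNS header.
answer_count = struct.unpack("!H", response[6:8])[0]
print("Answer records in the response:", answer_count)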


DNS Security

An old saying is that the Internet was safe until someone mentioned security. The DNS service and protocol were never designed to be secure. The philosophy behind the design was ease of use and open sharing. “The problem is with the DNS protocol; it's insecure.” (Schneier, 2008) Most DNS threats involved man-in-the-middle attacks, spoofing the IP address of the DNS server, performing full zone transfers to unauthorized servers, and DNS cache poisoning. It wasn't until 2008 that a significant warning was issued, when Dan Kaminsky published a vulnerability method which led to US-CERT Vulnerability Note VU#800113 (Department of Homeland Security, 2008). This vulnerability showed how the DNS cache could be targeted and poisoned with a predictable attack method. An attacker can corrupt the cache on the DNS server so that it directs users to illegitimate sites used in a new type of phishing scam. For example, if the cache gets polluted for www.woodlawnbank.com, any client making a DNS query for this site will be redirected to the illegitimate site, where information could be stolen.

The solution is to follow the DNSSEC RFC recommendations for securing DNS through the use of Public Key Infrastructure (PKI) to verify the origin of each record. The concept is that there is a hierarchy of trusted systems (Wijngaards, 2009). RFC 4033 (http://www.ietf.org/rfc/rfc4033.txt), RFC 4034 (http://www.ietf.org/rfc/rfc4034.txt) and RFC 4035 (http://www.ietf.org/rfc/rfc4035.txt) are now available to be implemented in BIND 9 and the 64-bit Microsoft Windows Server 2008 R2 DNS. The only current client operating system that takes advantage of DNSSEC natively is Microsoft Windows 7; others need slight modifications to enable this feature.
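As a hedged sketch of what a DNSSEC-aware query looks like from the client side, the snippet below uses the third-party dnspython package (an assumption; it must be installed separately) to set the EDNS DO bit and then checks whether the upstream validating resolver marked the answer as authenticated. It relies on the resolver to do the validation rather than validating signatures itself.

# Requires the third-party dnspython package (pip install dnspython).
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
# Ask for DNSSEC records (DO bit) and allow a larger EDNS payload size.
resolver.use_edns(0, dns.flags.DO, 4096)

answer = resolver.resolve("ietf.org", "A")
for record in answer:
    print("A record:", record.address)

# The AD (Authenticated Data) flag is set only when the resolver validated DNSSEC.
authenticated = bool(answer.response.flags & dns.flags.AD)
print("Validated by the resolver (AD flag):", authenticated)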

Implications to E-Commerce

Operators of e-commerce systems must perform a cost/benefit analysis to determine whether the expense of upgrading to Windows Server 2008 R2 DNS or BIND 9, along with the additional operational costs, is justified. Whether a Windows or an open-source solution is used, both iterations are required to run on a 64-bit operating system, while current DNS deployments more commonly run on older 32-bit operating systems; the upgrade therefore requires newer hardware with more memory and faster processors to handle the required overhead. The hardware cost is probably the biggest upfront expense, and in today's economic situation, companies are reluctant to make this move.

Another consideration is the increased amount of traffic generated by the additional packets used in the queries. There is also a need for more CPU processing power on both client and server systems, and on networking devices, due to the increased traffic. In layman's terms, people will feel that sites utilizing secure DNS come up more slowly than those using legacy DNS systems. The question for public perception of the secure DNS system is whether users are willing to wait the additional time or will move on to another site perceived to be faster.

Perhaps the most compelling consideration is the cost of not implementing a secure DNS solution and facing the consequences of traffic destined for the site being hijacked and redirected to a phishing site. DNSSEC "sounds like a good idea, but it's hard for me to assess the likelihood of this threat," says Michael Saltzman, vice president of network operations at gig.com, an online music distribution service. "In the pantheon of threats, viruses and more direct packet attacks rate a higher frequency. Those are the ones we worry about more." (Marsan, 2000)

Bibliography

Department of Homeland Security. (2008, July 8). US-CERT. Retrieved October 23, 2009, from United States Computer Emergency Readiness Team: http://www.kb.cert.org/vuls/id/800113

Marsan, C. D. (2000, October 16). DNS security upgrade promises a safer 'Net. Retrieved October 23, 2009, from NetworkWorld.com: http://www.networkworld.com/news/2000/1016dnsec.html

Schneier, B. (2008, July 29). The problem is with the DNS protocol; it's insecure. Retrieved October 23, 2009, from Schneier on Security: The problem is with the DNS protocol; it's insecure.

Wijngaards, W. C. (2009). Securing DNS: Extending DNS Servers with a DNSSEC Validator. IEEE Security & Privacy (Sept.-Oct.), 36-43.

What is Security in the context of Information Assurance?

This means different things to different people.  The first and foremost principle is the protection of human life.  After that, it is in the eye of the beholder.  To me, we have the foundations of Confidentiality, Integrity and Availability.  This was known as the CIA triad but has been changed to the AIC triad due to its similarity to the acronym of a US intelligence agency.  If you want information from them, click here.

Think of security more as a framework.  It cannot impede the flow of business.  It must be there to protect the business and individuals.  Where do you begin when you want to think about security?  First, there are some basic questions you should ask yourself.

  • Do I have anything that needs protecting?

If the answer is no, then you have nothing to worry about and you can go off being the free spirit that you always dreamed about.  Unfortunately for the rest of us, we do have something we need to protect.  This can be our identity, money, home, business, data and, most importantly, the ones we love.  This world is a very nice place, but there are some very bad things that go on in it.  Go out there and start thinking of all of the things that need protection.  It can become mind-boggling and could easily overwhelm you.  Next, start thinking of the value of those items you need to protect.  Are some of those things valuable in terms of monetary value, intellectual property or proprietary information, or are they just sentimental?  How could those items be replaced?  This is the process of valuation.  Some values are tangible and some are intangible.

More to come…

Microsoft Forefront Threat Management Gateway 2010

This is the product that I support at Microsoft.  Affectionately known as TMG, this is a very good firewall product for those out there looking for a solution that will integrate a proxy, secure publishing and an IDS/IPS system into one complete package.  Let me plug a book that I had the pleasure of doing a security review on prior to publication, Microsoft Forefront Threat Management Gateway (TMG) Administrator's Companion by Yuri Diogenes, Jim Harrison and Mohit Saxena.  Just go to Amazon and purchase it.

If you want to read more about TMG, go to the Microsoft Forefront Threat Management Gateway 2010 website, where you can even download a free trial.

About Me….

My name is Brennan Crowe.  I have worked at Check Point, Dell SecureWorks and Fiserv. I was a Security Support Engineer on the Microsoft Forefront Edge CTS Security team based in Las Colinas, and was with Microsoft from September 1997 until 2012. Prior to joining the Security organization, I was with the Windows Server Platforms Networking Team. I have a BA in Economics from the University of North Texas and an MBA in Global Business from the University of Dallas. I was a candidate for a Master of Science degree in Information Assurance from the University of Dallas, but life got in the way and I never finished my thesis. I hold the MCP, MCITP, MCSA +Security, MCSE +Security, MCT, ITIL Foundations, C|EH, CCNA and CISSP certifications.

My philosophy on security is the preservation of human life first, then using best judgment in risk analysis and assessment, with appropriate countermeasures for the protection of business assets based on fundamental cost/benefit analysis. In short, do what is necessary, but don't go overboard with countermeasures whose costs outweigh the benefits.

In my personal life, I am married and have a beautiful daughter that I am very protective of. I love the sports of hockey, college football and baseball. I am a hometown fan, and I follow the Dallas Stars, Texas Rangers, Dallas Mavericks, FC Dallas and of course, the Dallas Cowboys.