A properly designed and architected network is key to the success of any business. The network is the backbone of all services and products delivered through it, and if that network is slow, insecure, or prone to disruption, the products and services that depend on it will almost certainly fail. A properly layered network ensures that high load in one section cannot disrupt the operations of another. This layering also allows you to focus improvement dollars on the busy portions of the network, building extra capacity only where it is needed. The goal of architecting a network is to build it such that even unprecedented growth does not require the network to be redesigned, only augmented.
When architecting a network, the engineers are in a unique position to build security into the network infrastructure by segmenting the network and controlling access between segments. This provides a measure of nearly physical security on your network, analogous to a locked door between a main office and a secure storage room to which only certain people have a key. Restricting the flow of data between different portions of the network, while still allowing regular business functions, ensures that viruses and malicious users cannot easily spread across your entire network.
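The default-deny control between segments can be sketched as follows (the segment and service names here are hypothetical, and a real deployment would enforce this policy in firewalls and routers rather than application code):

```python
# Hypothetical policy: traffic between segments is denied unless the
# (source, destination) pair explicitly allows the service.
ALLOWED_FLOWS = {
    ("office", "servers"): {"https", "imap"},
    ("office", "internet"): {"https", "dns"},
    ("servers", "internet"): {"dns"},
}

def flow_permitted(src_segment, dst_segment, service):
    """Default-deny check for inter-segment traffic."""
    return service in ALLOWED_FLOWS.get((src_segment, dst_segment), set())
```

A compromised machine in an unlisted segment simply has no permitted path to the server segment, which is what keeps an infection from spreading network-wide.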
Security is often seen as contrary to usability, and as such is frequently forsaken as a cost-saving measure. Trading security to save training and implementation costs may seem like the right choice at times, but when the inevitable breach occurs, the damage to your business's reputation and to your relationships with customers carries an almost incalculable cost. The importance of information security grows each day: the threat of cyber terrorism is increasing, and more consumers demand that their personal information be treated with care and respect. You cannot afford to remain idle. Information security is the discipline of confidentiality, integrity, and availability; ensuring that your private data is not disclosed to unauthorized entities, altered or damaged in any way, or made unavailable by hardware failure or concerted attack has become mission critical. This protection must encompass the entire information processing system, from the servers and storage networks down to the notebooks and mobile devices that may contain or have access to sensitive data.
The guiding strategy for any security system should be defense in depth. Illustrated below is an example of an information security defense architecture that NearSource IT can implement for you.
Our VPN services can be provided through hardware deployed at your own site (this requires a compatible connection and the ability to change firewall infrastructure), or can be hosted at our facilities, allowing employees working from home, from a hotel, or over publicly accessible wireless connections to be confident that their transactions are secure. A third option blends the two: a VPN endpoint hosted at our facilities with a connection to your site, so that employees outside the office can access your corporate resources securely.
Not all services provide the ability to have their transactions encrypted; this is especially true of legacy systems. Any service that performs authentication or any other sensitive transaction in the clear is a potential source of disclosure. These services must not be allowed to continue running unencrypted traffic across your network, as they are vulnerable to disclosure, replay, and man-in-the-middle attacks. The "Augmented Encryption for Insecure Services" system can provide robust end-to-end 256-bit SSL encryption, as well as identity verification to ensure that your transaction is not being intercepted or altered. As an alternative, secure tunnels using AES or Blowfish encryption are also available.

The real cost of a lost or stolen computer is not so much the capital cost of replacing the physical machine, but the loss or potential disclosure of the data that device contained. Encrypting the hard drive to protect the data, especially on notebook computers, has become almost commonplace. However, most commercial encryption systems, including even those from PGP, contain encryption-bypass features that allow an unauthenticated user to access the data, or allow a lost password to be recovered. If your data is truly sensitive, such backdoors negate any security benefit of encrypting the data in the first place.
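The tunnelling idea for legacy services can be sketched with Python's standard `ssl` module (the host and port are placeholders; our production systems use dedicated tunnelling software, not this snippet):

```python
import socket
import ssl

def open_secure_channel(host, port):
    """Wrap a legacy plaintext protocol in TLS.  The default context
    encrypts the stream and verifies the server's certificate, which
    defeats eavesdropping and man-in-the-middle interception."""
    context = ssl.create_default_context()  # CERT_REQUIRED + hostname check
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

A legacy client then speaks its usual protocol through this channel instead of opening the plaintext socket itself, so nothing sensitive ever crosses the wire unencrypted.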
Web applications are a rapidly growing segment of the application market because they provide a level of flexibility and availability that until recently had been unattainable. With the newfound ability to make web applications available offline, the last limitation of the web application has been removed. There are many important aspects to consider when designing and developing a web application:

- Target platform: which browsers and operating systems will be supported (especially if the application makes use of architectures like Flash or AIR), and which types of device the application will be accessible from (mobile phones, netbooks and sub-notebooks, or full laptops and desktops).
- Target audience: is the web app designed to be used by highly technical staff, trained sales or customer relations staff, or the general public?
- Availability: how tolerant of downtime are the users or business processes that rely on the system?
- Scalability: how rapidly will the number of users and the size of the data sets increase, and how will the application deal with the additional load?

Having a team that has built the network infrastructure and web application architectures for large applications, and has experience with the issues of scaling web-based applications, is imperative to the success of your project.
Once your application has been developed and has started to grow, issues with scalability and redundancy can become apparent. These growing pains are not always obvious at design time, and their solutions are not always found in the application itself. Our expert team of network engineers and server administrators, all with programming backgrounds, can analyze your application and its infrastructure to produce a report on how the architecture of the application, and the infrastructure that supports it, can be adjusted to let the application scale to your needs. Traditional database replication provides only a minor scaling benefit, and requires the application to switch between the master and the slaves depending on the type and sensitivity of each query. This is where more advanced techniques become necessary and the value of our team's experience becomes obvious. Once your application has reached the point that the database is constantly buzzing with activity, it becomes impractical to lock that database to take a consistent backup; this, again, is a situation that calls for experienced administrators and a specialized approach.
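The master/replica switching mentioned above can be sketched as a simple query router (a toy model with string stand-ins for connections; real code would hold actual database handles):

```python
class ReplicatedRouter:
    """Send writes, and reads that must see the very latest data, to the
    master; rotate ordinary reads across the replicas."""

    def __init__(self, master, replicas):
        self.master = master
        self.replicas = list(replicas)
        self._next = 0  # round-robin cursor over the replicas

    def route(self, sql, needs_fresh_read=False):
        verb = sql.lstrip().split()[0].upper()
        if verb != "SELECT" or needs_fresh_read or not self.replicas:
            return self.master
        replica = self.replicas[self._next % len(self.replicas)]
        self._next += 1
        return replica

router = ReplicatedRouter("master-db", ["replica-1", "replica-2"])
```

The `needs_fresh_read` flag captures the "sensitivity of the query" point: a read that must not lag behind a just-committed write is sent to the master even though it is a SELECT.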
The final stage of a web application's development is deploying it in a production environment. There are many complexities to be examined when building the underlying infrastructure that will support the application; these infrastructure elements can also be responsible for providing parts of the security and redundancy that the application relies upon. Many applications have run into the situation where it is not the application itself that cannot scale, but the supporting network framework that has reached its limit. While adding more hardware may temporarily relieve the congestion, without a solution engineered for your specific application it will eventually run into the same limitations again. A proper network architecture designed by experienced engineers offers a much longer-term solution, giving you peace of mind and keeping your users happy.
With broadband penetration constantly increasing and users' attention spans ever decreasing, it is important to deliver your content as quickly as possible. No single source of content can provide the fastest delivery to all points on the globe; this is why our content servers are spread throughout North America and Europe. Using specialized DNS techniques, we steer users to the nearest subset of content servers while still maintaining fault tolerance and load balancing. Our network pays special attention to convergence, so that when you publish new content it is available throughout the content network as quickly as possible. Obviously the content network cannot store all content on all nodes indefinitely, but using a combination of data partitioning and master archive servers, it can automatically adapt to changing traffic levels. This means that if a piece of older content suddenly becomes popular again, the content network will automatically re-converge the data so it is available on all of the nodes. The content network is designed to be flexible and adaptable, such that it can be customized to suit the unique usage patterns of your data, applying specific business logic to the distribution of your content.
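The steering and spill-over behaviour can be illustrated with a small selection sketch (the regions, hostnames, and health flags are hypothetical; in production this logic lives in the DNS layer):

```python
# Preferred pool per client region, with fall-back order.
REGION_PREFERENCE = {
    "eu": ["eu", "na"],
    "na": ["na", "eu"],
}

def pick_content_servers(client_region, pools):
    """Return healthy servers from the nearest pool, spilling over to
    the next region when the preferred pool has no healthy nodes."""
    for region in REGION_PREFERENCE.get(client_region, ["na", "eu"]):
        healthy = [s["host"] for s in pools.get(region, []) if s["healthy"]]
        if healthy:
            return healthy
    return []

pools = {
    "eu": [{"host": "cdn1.eu.example.com", "healthy": True}],
    "na": [{"host": "cdn1.na.example.com", "healthy": True},
           {"host": "cdn2.na.example.com", "healthy": False}],
}
```

Because unhealthy nodes are filtered before selection, the same lookup provides both the proximity steering and the fault tolerance described above.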
Streaming video, especially high-definition video, presents some unique challenges. Unlike most regular data, video is seekable: a user can request that the video transmission start at a specific timestamp rather than at the beginning. However, most video container formats place their index at the end of the file, so the entire file must be downloaded before a seek operation can be performed. Specialized software in our content network can rewrite the videos in real time, placing this index at the beginning of the file, so a user can seek to a point later in the video without having to download the entire file first. Our content servers are capable of handling both Flash Video (FLV) and high-definition MPEG-4 (H.264) encoded video.
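In MP4 terms, the index is the `moov` box and the media data is the `mdat` box, so whether a file is "fast start" is simply a question of which comes first. A minimal sketch of that check, building tiny artificial files rather than parsing real video:

```python
import struct

def top_level_atoms(data):
    """Yield the type of each top-level MP4 box: a 4-byte big-endian
    size (which includes the 8-byte header) followed by a 4-byte type."""
    offset = 0
    while offset + 8 <= len(data):
        size, kind = struct.unpack_from(">I4s", data, offset)
        if size < 8:
            break
        yield kind.decode("ascii")
        offset += size

def is_fast_start(data):
    """Seeking before the full download works only if the 'moov'
    index box precedes the 'mdat' media box."""
    order = list(top_level_atoms(data))
    return order.index("moov") < order.index("mdat")

def _atom(kind, payload=b""):
    """Build a minimal MP4 box for demonstration purposes."""
    return struct.pack(">I", 8 + len(payload)) + kind.encode("ascii") + payload

# A typical camera file stores the index last; a rewritten file stores it first.
slow_file = _atom("ftyp") + _atom("mdat", b"\x00" * 16) + _atom("moov", b"\x00" * 16)
fast_file = _atom("ftyp") + _atom("moov", b"\x00" * 16) + _atom("mdat", b"\x00" * 16)
```

The real-time rewriting service does the harder job of relocating `moov` (and adjusting its internal offsets); this sketch only detects which layout a file has.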
Monitoring a network and its services encompasses more than just making sure that each service is reachable. It is imperative to ensure that the service is operating correctly, and to establish baseline performance statistics to compare against in the future. Our monitoring system can not only check the availability of your service, but can also complete a test transaction to ensure that the service is responding properly. The monitoring system also collects utilization and performance information so that a deviation from baseline operation can be detected immediately. Rapid detection helps establish the cause of the deviation before it leads to a disruption. When paired with our DNS Failover and Load Balancing system, any service that is unreachable (or has significantly deviated from its baseline) can have its traffic redirected to another node that is operating normally, so that users do not experience any disruption of service.
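The baseline comparison can be sketched as a simple statistical check (the three-standard-deviation threshold and the sample data are illustrative assumptions, not our production tuning):

```python
import statistics

def deviates_from_baseline(samples, latest, threshold=3.0):
    """Flag a measurement that sits more than `threshold` standard
    deviations away from the historical baseline."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Hypothetical response-time history in milliseconds.
history = [102, 98, 101, 99, 100, 103, 97]
```

A reading of 104 ms falls within normal variation for this history, while a reading of 250 ms would be flagged immediately, well before users start reporting problems.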
Most load balancing systems distribute load round-robin style, sharing requests equally among the nodes on the assumption that all requests are equally complex to process. Our system monitors the key performance indicators of each node and distributes the load in a hybrid fashion, using a weighted round-robin: all nodes still receive some fraction of the requests, but the least busy nodes receive a larger portion. The system can also immediately remove a node from the pool when it stops responding or has otherwise been flagged as unusable. An additional feature of our load balancing system is network partitioning: a different base pool of servers can be used depending on where on the network or internet the requesting user is located. This means you can maintain a separate pool of servers for European users, but also have them spill over to the North American network if necessary. This feature also allows internal users to be redirected to an on-site node to avoid consuming internet-facing resources.
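A minimal sketch of weighted round-robin (the weights here stand in for the live performance indicators; a production balancer would interleave the rotation more smoothly and recompute weights continuously):

```python
import itertools

def weighted_round_robin(weights):
    """Cycle through nodes so each appears in proportion to its weight;
    a higher weight means the node is less busy and gets more requests."""
    pool = []
    for node, weight in sorted(weights.items()):
        pool.extend([node] * weight)
    return itertools.cycle(pool)

# Node "a" is the least busy, "c" the busiest; "c" still gets some traffic.
rr = weighted_round_robin({"a": 3, "b": 2, "c": 1})
first_cycle = [next(rr) for _ in range(6)]
```

Removing a failed node is then just a matter of rebuilding the cycle with that node's weight dropped from the mapping.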
Writing detailed technical documents requires a deep and thorough understanding of the subject matter, and it is not always feasible to keep such expertise on your permanent staff. NearSource IT has a number of subject-area experts who can produce documentation for your systems, whether that documentation is geared towards technical staff (such as server operation manuals or application maintenance documentation) or towards the end user.
Enforcing security in a corporate environment requires a clearly defined policy: one thorough enough to cover all aspects of security, detailed enough for those implementing it, and clear enough to be easily understood by those who must follow it. Having your security policy written by experienced professionals ensures that all of these requirements are met, and that the policy is more than just a large document whose use is mandated but whose implementation is impractical.