1:-Microsoft virtualization
1.1:-Hyper-V
1.1.1:-Introduction
1.1.2:-Architecture
1.2:-App-V
1.3:-MED-V

1:-MICROSOFT VIRTUALIZATION
Virtualization has rapidly grown from a technology used for labs and development work into a core IT infrastructure technology. Virtualization has always been complex; to reduce this complexity, Microsoft has created various virtualization products with similar-sounding yet nondescript names, such as Hyper-V, App-V, MED-V, Microsoft User Environment Virtualization (UE-V), Remote Desktop Services, and System Center Virtual Machine Manager. Each one is designed to solve a different business problem. To understand these products, we first need to know Microsoft's virtualization terminology and methodology.
1.1:-HYPER-V
1.1.1:-Introduction
Hyper-V can create virtual machines on x86-64 systems. A host server running Hyper-V can be accessed remotely by multiple guest computers, and these guests can perform as if they were using the host server directly. If an application is not available on a guest computer, its user can run the application on the host server remotely.
Hyper-V is a hypervisor-based virtualization solution, which means that the software layer providing the virtualization support runs directly on the physical system hardware. This configuration provides a high-performance virtualization platform. A finalized version was released on June 26, 2008.
Virtual Machine Security - Full virtualization and paravirtualization are two kinds of virtualization in the cloud computing paradigm. In full virtualization, the entire hardware architecture is replicated virtually. In paravirtualization, the operating system is modified so that it can run concurrently with other operating systems. VMM instance isolation ensures that different instances running on the same physical machine are isolated from each other. However, current VMMs do not offer perfect isolation: bugs have been found in all popular VMMs that allow escaping from the VM (virtual machine), and vulnerabilities have been found in all virtualization software that can be exploited by malicious users to bypass certain security restrictions and/or gain escalated privileges. Application software running on, or being developed for, cloud computing platforms presents different security challenges, depending on the delivery model of the particular platform. The flexibility, openness, and public availability of cloud infrastructure are threats to application security. Existing vulnerabilities such as trap doors, overflow problems, and poor-quality code open the door to various attacks. The multi-tenant environment of cloud platforms, the lack of direct control over the environment, and access to data by the cloud platform vendor are the key issues in using a cloud application. Preserving the integrity of applications being executed on remote machines remains an open problem.
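The contrast between the two approaches described above can be sketched in a toy model: under full virtualization an unmodified guest issues privileged instructions that the VMM must trap and emulate, while under paravirtualization the guest OS is rewritten to call into the VMM explicitly. All class and method names here are invented for illustration; they do not correspond to any real hypervisor API.

```python
# Toy contrast of the two virtualization styles; purely illustrative.

class FullVirtualizationVMM:
    """Replicates the hardware: the unmodified guest issues privileged
    instructions, which the VMM traps and emulates in software."""
    def run_guest_instruction(self, instruction):
        if instruction.startswith("PRIV_"):
            return f"trapped-and-emulated:{instruction}"
        return f"executed-directly:{instruction}"

class ParaVirtualizationVMM:
    """The guest OS is modified so privileged operations become explicit
    hypercalls into the VMM instead of raw hardware instructions."""
    def hypercall(self, name):
        return f"hypercall-handled:{name}"

full = FullVirtualizationVMM()
para = ParaVirtualizationVMM()
print(full.run_guest_instruction("PRIV_write_page_table"))
print(full.run_guest_instruction("add r1, r2"))
print(para.hypercall("update_page_table"))
```

The point of the sketch is that full virtualization pays a trap-and-emulate cost to run guests unmodified, whereas paravirtualization shifts that cost into modifying the guest OS once.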
Virtualization can reduce the number of physical systems we need to acquire and lets us get more value out of the servers we already have. Most traditionally built systems are underutilized, and virtualization allows maximum use of the hardware investment. With virtualization, you can also run multiple types of applications, and even different operating systems for those applications, on the same physical hardware.
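The consolidation benefit above can be made concrete with a back-of-the-envelope estimate: sum the actual utilization of each underutilized server and divide by the load you are willing to place on each virtualization host. The function name, the target utilization, and the workload numbers are all invented for illustration.

```python
import math

def hosts_needed(workload_utilizations, target_host_utilization=0.8):
    """Rough lower bound on virtualization hosts required.

    Each entry in workload_utilizations is the fraction of one physical
    server actually used (e.g. 0.15 = 15% busy). Packing that total load
    onto hosts run at the target utilization gives the host count.
    """
    total_load = sum(workload_utilizations)
    return math.ceil(total_load / target_host_utilization)

# Ten traditionally built servers, each only ~15% utilized:
workloads = [0.15] * 10
print(hosts_needed(workloads))  # 2 hosts replace 10 physical servers
```

This ignores memory pressure, I/O contention, and failover headroom, so a real sizing exercise would be more conservative, but it captures why underutilized hardware is the classic consolidation target.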
Virtualization’s rate of adoption is well characterized by the five characteristics described in the framework for the concepts of innovation (Luftman & Bullen, 2004, p. 189). It is perceived to be better than physical servers in its ability to host multiple operating systems and share the host’s resources. Its encapsulation of resources allows it to operate as if it were a physical machine while being totally virtual, giving it a relative advantage. It is compatible with all baseline operating systems on the market. Complexity in implementation is minimal, making it more attractive to adopt. The vendors allow free downloads and trials. Its visibility in competition with Microsoft’s Hyper-V has shown multiple advantages. (Luftman & Bullen, 2004, p. 190)
Network-based virtualization abstracts the storage of data and applications away from the host machine. This is achieved through fibre channel connections between the machines and the servers running virtualization. The operating systems on the separate machines are not a factor to consider, as they work independently. For it to meet expectations, certain supporting services must be provided.
The first physical server required is installed with the RD Virtualization Host role, which has Hyper-V as a prerequisite, in order to host virtual machines (VMs). Several RD Virtualization Host servers can be pooled to create a larger array of virtual hosts.
Virtualization is being able to give a physical device the power, through the use of software, to do more than that physical device was technically designed and able to do (Santana, 2014, p. 12). For example, a server can only run one operating system at a time. However, when a hypervisor is used in a server, the hypervisor is a layer of software that acts like the server itself so that many operating systems can be run from that one server. The hardware, in this case a server, has been virtualized. The goal is to use all of the computer’s resources all of the time, and the only way to do that is to have enough things running that the resources are being used consistently and efficiently. An analogy for this could be online classes. If each teacher only had one student, the teacher’s resources of time and expertise would not be utilized efficiently because that one student will not need help all day, every day. If the teacher is assigned to fifteen students, the students can still get help when needed from the teacher, and they would not even be aware that they are not alone in the class. Because it is an online class, the teacher does not need any more physical resources to teach an entire class than was needed for one student. The students are receiving the benefits of being taught by that teacher without needing to be with him or her physically.
Virtualization in a network is the most interesting thing I have learned about. In full virtualization, the virtual machine completely simulates a real physical host, which allows most operating systems and applications to run within the virtual machine without being modified in any way. I would envision using virtualization when testing a new service or application in the development stages, trying the product on different operating systems. I think virtualization is brilliant; a problem that arises is security and how you go about protecting your data in the virtual machine. Placing a virtual firewall in front of the virtual machine, or routing its traffic through physical machines that are protected by a firewall, is a good way to protect it.
The hypervisor is the virtualization layer responsible for virtualization itself: a platform that allows multiple virtual machines to run on a single physical host at the same time.
Virtualisation works by splitting a physical server into multiple virtual servers, with each server’s resources masked from the end user. It is commonly used by businesses to cut costs, especially in web hosting, where a hosting provider will take one powerful server and partition it into hundreds of smaller servers that can be sold off to consumers at cut-throat prices.
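The hosting model just described can be sketched as a simple allocator: one large physical server hands out fixed slices of CPU and RAM to tenants, and each tenant sees only its own slice. The class, tenant names, and capacity figures are illustrative assumptions, not a real provisioning API.

```python
# Sketch of partitioning one powerful server into many small virtual servers.

class PhysicalServer:
    def __init__(self, cpu_cores, ram_gb):
        self.free_cpu = cpu_cores
        self.free_ram = ram_gb
        self.virtual_servers = {}

    def carve_out(self, tenant, cpu_cores, ram_gb):
        """Allocate a virtual server slice; fails once the host is full."""
        if cpu_cores > self.free_cpu or ram_gb > self.free_ram:
            raise RuntimeError("host capacity exhausted")
        self.free_cpu -= cpu_cores
        self.free_ram -= ram_gb
        self.virtual_servers[tenant] = {"cpu": cpu_cores, "ram_gb": ram_gb}
        return self.virtual_servers[tenant]

host = PhysicalServer(cpu_cores=64, ram_gb=512)
print(host.carve_out("tenant-a", 2, 4))
print(host.carve_out("tenant-b", 4, 8))
print(host.free_cpu, host.free_ram)  # 58 500
```

Real hypervisors add scheduling, overcommit, and isolation on top of this, but the core economics is exactly this carving of one large capacity pool into many sellable slices.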
With the same ubiquity mentioned above, there is now a rampant need to adopt virtualization and separate software from hardware: the recent cultural shift toward mobility, together with the flexible, scalable nature of enterprise needs, gives an organization the option of procuring what it needs, when it needs it. When users found the need to expand, they also found the need to diversify, so attention slowly shifted to re-allocating resources, moving safety-critical data beyond the organization’s four walls, and converting fixed costs into variable costs. So if organizations find underutilized resources in their infrastructure, virtualization offers a way to reclaim them.
This efficiency can be achieved through virtualization. [1][2][3] By virtualization, we mean that a single physical resource can be exposed as multiple virtual resources, or that multiple physical resources can be exposed as a single virtual resource. The resource can be anything: a server, an OS, an application, or a storage device. The main aim of virtualization is to efficiently utilize limited IT resources by putting the many idle resources to work. [4]
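The two directions of the definition above can be shown in a few lines: one physical resource split into several virtual ones, and several physical resources pooled behind a single virtual one. The function names and the capacity figures are illustrative assumptions only.

```python
# The two mappings that define virtualization, in miniature.

def one_to_many(physical_capacity, n_virtual):
    """Expose one physical resource as n equal virtual resources."""
    return [physical_capacity / n_virtual] * n_virtual

def many_to_one(physical_capacities):
    """Expose several physical resources as a single virtual resource."""
    return sum(physical_capacities)

# e.g. one 1000 GB disk carved into four virtual volumes:
print(one_to_many(1000, 4))
# e.g. three physical disks pooled into one large virtual volume:
print(many_to_one([500, 500, 250]))
```

Server partitioning (Hyper-V) is the one-to-many case; storage pooling behind a SAN is the many-to-one case.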
This paper is targeted at presenting the real-time approach to, and benefits of, incorporating virtualization into an existing IT infrastructure.
The security aspects of virtualization are of vital importance. The cost benefits of virtualization allow enterprises to significantly reduce the space and electrical power required to run data centers and to streamline the management of an ever-growing number of servers. Virtualization also provides a means for expedient scalability. Given today's economic climate and cost-cutting mandates, it is not surprising that the analyst firm Gartner recently predicted that 50 percent of workloads will run inside virtual machines by 2012. According to reports from Odyssey, “beyond the benefits of economic savings and enhanced flexibility in capacity planning, virtualization also introduces a number of threats and challenges to the security of organizational information. Among such threats and challenges is the increased network complexity and diminished visibility of the network traffic flowing within the virtual environment, which makes it difficult to detect Malicious “insider” Activity and Attacks. In the event that an internal malicious user or an attacker manages to compromise the virtualization layer, or hypervisor, this could lead to a compromise of all servers hosted on this virtual environment and as a result all applications and data residing in it.”
There are some major differences between the 2008 and 2012 versions of Hyper-V in their support for memory, storage, and networking, and in their overall manageability. In Windows Server 2008, physical memory was limited to 1TB; there were only 512 virtual processors per host and 4 per virtual machine; memory per VM was limited to 64GB; and there could be only 384 active virtual machines, with cluster nodes of 16. Looking at the progression to 2012, physical memory caps at 4TB; there are now 2,048 virtual processors per host and 64 per virtual machine; memory per VM is now up to 1TB; and there can be 1,024 active virtual machines, with cluster nodes of 64. Server 2012 also allows live storage migration within Hyper-V, limited only by what the hardware will allow, and the VHDX virtual disk format allows for up to 64TB per virtual disk.
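The limits quoted in the paragraph above are easier to compare side by side. The figures come from the text itself; only the growth factors are computed, and the dictionary layout is an illustrative choice.

```python
# Hyper-V scalability limits quoted above: (Server 2008, Server 2012).
limits = {
    "physical memory (TB)":      (1,   4),
    "virtual processors / host": (512, 2048),
    "virtual processors / VM":   (4,   64),
    "memory per VM (GB)":        (64,  1024),
    "active VMs":                (384, 1024),
    "cluster nodes":             (16,  64),
}

for metric, (v2008, v2012) in limits.items():
    print(f"{metric}: {v2008} -> {v2012} ({v2012 / v2008:.1f}x)")
```

Every limit grew by a factor of roughly 3x to 16x between the two releases, which is the core of the manageability argument for Server 2012.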