I've recently read up on Docker and the coming integration with the next Windows Server. Docker, generally speaking, is a framework to encapsulate server applications and their dependencies into containers and orchestrate their deployment. I have limited experience with a more client-oriented but similar concept: application virtualization, where you create a package that contains all the files, registry keys and whatnot that make up a program and deliver it to a client to run. The benefit of both solutions is that you don't have to install an application in the conventional, static manner, but instead leave it packaged. When there's an update, you install it once into your package and then redeploy that centrally, without having to touch the machines that run it. You also don't have to uninstall, as nothing was ever really installed in the first place. Through encapsulation, dependencies like specific versions of Java are bundled up with the applications themselves and don't collide with whatever else you're running. That's a good thing.
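To make the dependency-bundling point concrete, here is a minimal sketch using the Python Docker SDK (docker-py); the SDK choice and the image tags are just my own illustration, not anything a particular vendor ships:

import docker

# Minimal sketch (assumes Docker and the docker-py package are installed).
# Each container carries its own pinned Java runtime, so two different
# versions can run on the same host without colliding.
client = docker.from_env()

for image in ("openjdk:8-jre", "openjdk:11-jre"):  # illustrative image tags
    # "java -version" prints to stderr, hence stderr=True
    output = client.containers.run(image, ["java", "-version"], stderr=True, remove=True)
    print(image, "->", output.decode().splitlines()[0])

The point is simply that the runtime travels with the application instead of being installed on the host, so nothing else on the machine has to care which Java version it is.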
Both technologies make a lot of sense for many scenarios, especially when it comes to large deployments. However, in my personal experience, I've never seen them applied. In many environments there is a multitude of very specific applications that the operational tech department (maybe even outsourced) has only a vague understanding of; it handles the first few levels of support and leaves deeper issues to the vendors' support teams. When you introduce a technology that directly affects the application you're running, vendors will always point to that first when there's an issue that isn't transparent, i.e. "it runs fine with other customers, and they don't use this technology". Some vendors go as far as explicitly refusing to support their programs deployed this way, because their developers have no idea about these technologies, haven't verified them and can't greenlight their use. From a dev standpoint, it takes a lot of QA effort to actually test applications in this additional way, and unless many or important customers explicitly ask for it, they won't make that investment and take the risk. Instead, the recommendation is usually to somehow automate the conventional installation.
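"Automating the conventional installation" often boils down to something as unglamorous as scripting the vendor's setup program to run unattended. A minimal sketch, where the installer path and the silent switch are placeholders that vary from vendor to vendor:

import subprocess

# Minimal sketch of an unattended, conventional installation.
# The installer path and the "/S" silent switch are placeholders;
# every vendor documents its own flags (/S, /quiet, /qn, ...).
subprocess.run([r"C:\installers\VendorApp-setup.exe", "/S"], check=True)

Nothing about the application changes, which is exactly why vendors are comfortable with it.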
The moment you're not installing applications the conventional way, your systems are the first to be second-guessed. You introduce a new source of uncertainty, and thus risk, into a layer that conventional IT usually doesn't have to worry about. My experience has been that although it makes a lot of sense to deploy applications, both server- and client-side, to a network in this manner, customers will most often refuse because it makes things more complicated and risky. Instead they would much rather have you create a new virtual server to run the service, which of course means more overhead. For the person managing the IT budget, it is easier to ask for more hardware to run additional systems (CEOs can understand that) than to spend more budget on application-related services (creating the containers). Solving software problems with hardware has always been popular.
Of course this seems bad for service providers, as chunks of the budget go to hardware vendors instead, but customer satisfaction often seems higher when customers don't need to wrap their heads around new concepts and can instead invest in something they both understand and can touch. Service providers are also happy to comply, because new hardware means new licenses (operating systems, SQL, backup...) and thus more revenue for them. The technologically best solution is not always in the best interest of IT service providers.
Docker-like containers and application virtualization are therefore two technologies I would like to use more often in my line of work, but apparently doing things the conventional way requires less know-how and is more easily sold to customers. For cloud providers this is very different, of course; there I can see these methods being applied much more readily.