Configuration management tools can be very useful for provisioning your Docker containers, but you don't want these tools to end up in your Docker images. Using data volume containers and Docker Compose, we can have both and still keep a slim image. Here is how you can do it with Chef.
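The data volume container pattern can be sketched with plain `docker` commands; the image and container names below ("chef-image", "chef-tools", "myapp") are assumptions, not the names used in the actual post:

```shell
# 1) A data volume container that only carries the Chef install under /opt/chef:
docker create -v /opt/chef --name chef-tools chef-image true

# 2) The application container borrows that volume at provisioning time,
#    so Chef never has to be baked into the application image itself:
docker run --rm --volumes-from chef-tools myapp \
    /opt/chef/bin/chef-solo -c /etc/chef/solo.rb
```

In a Compose file, the second step corresponds to a `volumes_from: [chef-tools]` entry on the application service.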
If you are tasked with migrating a Team Foundation Server (TFS) code repository to Git and search the net, you will quickly find a number of tools:
The latter two solutions were tried when converting the TFS repository to a Git repository.
In a previous article, I talked about converting from TFS to Git and documented how to normalize the line endings in the converted Git repository. Another aspect of source code is whether spaces or tabs should be used.
Again, I'm not going to argue in favor of one or the other, but just document how you can get a consistent setup for your team. Unlike the line ending configuration, however, this setup can't easily be committed to the repository itself. The main reason is the scripts that need to be executed to make sure that any content is processed when checking files out of or into the repository.
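Git's clean/smudge filters are one way to run such scripts on check-in and checkout. A minimal sketch, assuming tabs should become four spaces on check-in (the filter name "tabspace" and the use of `expand` are my assumptions, not necessarily what the post uses):

```shell
# Demo repository; in practice you would run the config commands in your clone.
git init -q tabdemo && cd tabdemo

# "clean" runs when content is checked in, "smudge" when it is checked out.
git config filter.tabspace.clean  'expand --tabs=4'
git config filter.tabspace.smudge cat

# Tell Git which files the filter applies to:
echo '*.c filter=tabspace' > .gitattributes

# Any *.c file now has its tabs expanded when staged:
printf '\tmain();\n' > demo.c
git add .gitattributes demo.c
git show :demo.c        # the staged copy contains spaces, not a tab
```

Because the filter commands live in `.git/config`, each team member has to run the `git config` lines themselves, which is exactly why this setup can't simply be committed alongside the code.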
The setup used in this article is an Ubuntu 12.04 host system with VirtualBox 4.1.18 installed. Partitions 3 and 4 of disk
/dev/sda will be made available to a virtual machine. All virtual machines run as user
virtualbox. All console samples clearly show the user that runs the command.
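The key step is creating a raw-disk VMDK that exposes only the chosen partitions to VirtualBox; the `.vmdk` path below is an example:

```shell
# Create a VMDK backed by partitions 3 and 4 of /dev/sda:
sudo VBoxManage internalcommands createrawvmdk \
    -filename /home/virtualbox/sda-part34.vmdk \
    -rawdisk /dev/sda -partitions 3,4

# The generated files must be readable by the user that runs the VMs
# (createrawvmdk also writes a companion *-pt.vmdk file):
sudo chown virtualbox /home/virtualbox/sda-part34.vmdk*
```

The resulting VMDK can then be attached to a virtual machine like any other disk image, while accesses to the other partitions of /dev/sda are blocked.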
There is a lot of documentation on the Debian/Ubuntu packaging tools, but most of these pages just list the options of each tool. After fighting with these for a while, I turned to the
#ubuntu-packaging IRC channel for help. After explaining my intent to backport some packages unchanged, I was pointed to the backportpackage tool:
backportpackage fetches a package from one distribution release or from a specified .dsc path or URL and creates a no-change backport of that package to one or more Ubuntu releases, optionally doing a test build of the package and/or uploading the resulting backport for testing.
This tool comes with the
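A no-change backport as described above might look like this (the package name "foo" and the release pair are hypothetical):

```shell
# Fetch "foo" from quantal, rebuild its source package for precise,
# and do a local test build (-b) of the result:
backportpackage -s quantal -d precise -b foo
```

Adding `-u ppa:<user>/<ppa-name>` instead of `-b` would upload the backport to a PPA for testing.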
Secure Shell is a great tool for connecting securely between several machines. In the past weeks I have been using it more and more, but I was getting tired of typing so much. I found a great article on setting up passwordless authentication using public/private keys and on defining multiple SSH identities, but it still wasn't enough.
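For reference, the passwordless setup with a per-host identity boils down to a few commands; the host name, user, and key path below are made up for illustration:

```shell
# 1) Create a key pair and install the public key on the remote host:
ssh-keygen -t rsa -f ~/.ssh/id_rsa_work -N ''
ssh-copy-id -i ~/.ssh/id_rsa_work.pub alice@work.example.com

# 2) Give the host a short alias with its own identity in ~/.ssh/config:
cat >> ~/.ssh/config <<'EOF'
Host work
    HostName work.example.com
    User alice
    IdentityFile ~/.ssh/id_rsa_work
EOF

# Now "ssh work" replaces "ssh -i ~/.ssh/id_rsa_work alice@work.example.com".
```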
Continuing with the volume created in the post Sharing EBS Volumes Among Instances, in this post I show how to create a snapshot, create a new volume from that snapshot, and mount the new volume in an instance. Remember that the volume created in the previous post contained one file called
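With the EC2 API tools, the snapshot-and-restore sequence can be sketched as follows; all IDs, the zone, and the device names are placeholders for the values the tools return:

```shell
# Snapshot the existing volume:
ec2-create-snapshot vol-12345678

# Create a fresh volume from that snapshot and attach it to an instance:
ec2-create-volume --snapshot snap-87654321 -z us-east-1a
ec2-attach-volume vol-0a1b2c3d -i i-deadbeef -d /dev/sdf

# On the instance (recent kernels may expose /dev/sdf as /dev/xvdf):
sudo mkdir -p /mnt/restored
sudo mount /dev/xvdf /mnt/restored
```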
In this post I share an experiment: create an EBS volume, attach it to an EC2 instance, mount it in the instance, put a file on it, unmount it, and detach it. Afterwards the volume is mounted in another instance (after the first instance has been terminated, because an EBS volume cannot be attached to more than one instance at a time).
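The full round trip with the EC2 API tools looks roughly like this; IDs, zone, and device names are placeholders to be replaced with the values the tools print:

```shell
# Create a 1 GiB volume and attach it to a running instance:
ec2-create-volume -s 1 -z us-east-1a
ec2-attach-volume vol-0a1b2c3d -i i-deadbeef -d /dev/sdf

# On the instance:
sudo mkfs.ext4 /dev/xvdf          # first use only: create a filesystem
sudo mkdir -p /mnt/data && sudo mount /dev/xvdf /mnt/data
echo 'hello' | sudo tee /mnt/data/hello.txt
sudo umount /mnt/data

# Back on the workstation:
ec2-detach-volume vol-0a1b2c3d
```

After detaching, the same `ec2-attach-volume`/`mount` steps against a second instance make the file reappear there.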
Section “Setting up the Tools” of the Amazon EC2 Getting Started Guide explains how to set up environment variables so that the EC2 tools can find themselves (EC2_HOME), Java (JAVA_HOME), the private access key file (EC2_PRIVATE_KEY), and the certificate file (EC2_CERT). The guide also suggests changing the PATH variable so that you can run EC2 commands from anywhere.
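In a shell startup file this typically amounts to a handful of exports; the paths and key file names below are examples and must be adapted to your own installation:

```shell
# Where the EC2 API tools and Java are installed:
export EC2_HOME=/usr/local/ec2/ec2-api-tools
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk

# Credentials downloaded from the AWS account page:
export EC2_PRIVATE_KEY=$HOME/.ec2/pk-XXXXXXXX.pem
export EC2_CERT=$HOME/.ec2/cert-XXXXXXXX.pem

# Make the ec2-* commands available from anywhere:
export PATH=$PATH:$EC2_HOME/bin
```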
In order to experiment with Amazon Web Services, I requested an AWS account and installed a bunch of software to get started. Here is an overview of what I did to get up and running.