This is a Terraform provider that lets you provision servers on a libvirt host via Terraform. Update your .terraformrc to refer to the binary:

    providers {
      libvirt = "/usr/bin/terraform-provider-libvirt"
    }

Using the provider

Here is an example that sets up a virtual server resource. Create the following as libvirt.tf and run the terraform commands from that directory:

    provider "libvirt" {
      uri = "qemu:///system"
    }

You can also set the URI in the LIBVIRT_DEFAULT_URI environment variable.

Now, define a libvirt domain:

    resource "libvirt_domain" "terraform_test" {
      name = "terraform_test"
    }

Now you can see the plan, apply it, and then destroy the infrastructure:

    $ terraform plan
    $ terraform apply
    $ terraform destroy
Nomad is a cluster manager, designed for both long-lived services and short-lived batch processing workloads. Developers use a declarative job specification to submit work, and Nomad ensures that constraints are satisfied and resource utilization is optimized by efficient task packing (a minimal workflow is sketched after the feature list below). Nomad supports all major operating systems and virtualized, containerized, or standalone applications.

The key features of Nomad are:

- Docker Support: Jobs can specify tasks which are Docker containers. Nomad will automatically run the containers on clients which have Docker installed, scale up and down based on the number of instances requested, and automatically recover from failures.
- Multi-Datacenter and Multi-Region Aware: Nomad is designed to be a global-scale scheduler. Multiple datacenters can be managed as part of a larger region, and jobs can be scheduled across datacenters if requested. Multiple regions join together and federate jobs, making it easy to run jobs anywhere.
- Operationally Simple: Nomad runs as a single binary that can be either a client or server, and is completely self-contained. Nomad does not require any external services for storage or coordination; it combines the features of a resource manager and scheduler in a single system.
- Distributed and Highly-Available: Nomad servers cluster together and perform leader election and state replication to provide high availability in the face of failure. The Nomad scheduling engine is optimized for optimistic concurrency, allowing all servers to make scheduling decisions to maximize throughput.
- HashiCorp Ecosystem: Nomad integrates with the entire HashiCorp ecosystem of tools. Like all HashiCorp tools, Nomad follows the Unix philosophy of doing something specific and doing it well. Nomad integrates with tools like Packer, Consul, and Terraform to support building artifacts, service discovery, monitoring, and capacity management.
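As a minimal sketch of that workflow on a single machine, the shell session below assumes Nomad is installed and, for Docker tasks, that Docker is running locally; example.nomad is the skeleton job file that nomad init generates:

    $ nomad agent -dev            # start a single-node agent in development mode (acts as server and client)
    $ nomad init                  # write a skeleton job specification to example.nomad
    $ nomad run example.nomad     # submit the job; Nomad places its tasks on eligible clients
    $ nomad status example        # inspect the job and the state of its allocations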
Client agent for Nomad
Server agent for Nomad
Packer is a tool for building identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel.

Packer comes out of the box with support for the following platforms:

- Amazon EC2 (AMI), both EBS-backed and instance-store AMIs
- DigitalOcean
- Docker
- Google Compute Engine
- OpenStack
- Parallels
- QEMU, both KVM and Xen images
- VirtualBox
- VMware

Support for other platforms can be added via plugins.

After Packer is installed, create your first template, which tells Packer what platforms to build images for and how you want to build them. In our case, we'll create a simple AMI that has Redis pre-installed. Save this file as quick-start.json. Be sure to replace any credentials with your own.

    {
      "builders": [{
        "type": "amazon-ebs",
        "access_key": "YOUR KEY HERE",
        "secret_key": "YOUR SECRET KEY HERE",
        "region": "us-east-1",
        "source_ami": "ami-de0d9eb7",
        "instance_type": "t1.micro",
        "ssh_username": "ubuntu",
        "ami_name": "packer-example {{timestamp}}"
      }]
    }

Next, tell Packer to build the image:

    $ packer build quick-start.json
    ...

Packer will build an AMI according to the "quick-start" template, and the AMI will be available in your AWS account. To delete the AMI, you must manually delete it using the AWS console. Packer only builds your images; it does not manage their lifecycle. Where they go and how they are run is up to you.
Serf is a decentralized solution for service discovery and orchestration that is lightweight, highly available, and fault tolerant. Serf runs on Linux, Mac OS X, and Windows. An efficient and lightweight gossip protocol is used to communicate with other nodes. Serf can detect node failures and notify the rest of the cluster. An event system is built on top of Serf, letting you use Serf's gossip protocol to propagate events such as deploys, configuration changes, etc. Serf is completely masterless with no single point of failure.

Here are some example use cases of Serf, though there are many others:

- Discovering web servers and automatically adding them to a load balancer
- Organizing many memcached or redis nodes into a cluster, perhaps with something like twemproxy, or maybe just configuring an application with the address of all the nodes
- Triggering web deploys using the event system built on top of Serf
- Propagating changes to configuration to relevant nodes
- Updating DNS records to reflect cluster changes as they occur
- Much, much more
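As a rough sketch, the commands below start two agents on one host, join them into a cluster, and propagate a custom event; the node names, ports, and event name are arbitrary examples:

    $ serf agent -node=node-a -bind=127.0.0.1:7946 &
    $ serf agent -node=node-b -bind=127.0.0.1:7947 -rpc-addr=127.0.0.1:7374 &
    $ serf join -rpc-addr=127.0.0.1:7374 127.0.0.1:7946   # node-b joins node-a's cluster
    $ serf members                                        # list the members node-a can see
    $ serf event deploy 1.2.3                             # broadcast a custom "deploy" event via gossip

In a real deployment each agent would typically also register event handler scripts, which Serf invokes whenever membership changes or a user event such as the one above arrives.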
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

The key features of Terraform are:

- Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.
- Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.
- Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
- Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

For more information, see the introduction section of the Terraform website.
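As a short sketch of the plan/apply cycle, assuming a directory that already contains Terraform configuration files (for example the libvirt.tf shown above), saving the plan to a file makes the reviewed plan exactly what gets applied:

    $ terraform plan -out=tfplan   # generate an execution plan and save it to a file
    $ terraform apply tfplan       # apply exactly the saved plan
    $ terraform graph              # print the resource dependency graph in DOT format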
Jenkins monitors executions of repeated jobs, such as building a software project or jobs run by cron. Among those things, Jenkins currently focuses on the following two jobs:

- Building/testing software projects continuously, just like CruiseControl or DamageControl. In a nutshell, Jenkins provides an easy-to-use continuous integration system, making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. The automated, continuous build increases productivity.
- Monitoring executions of externally-run jobs, such as cron jobs and procmail jobs, even those that are run on a remote machine. For example, with cron, all you receive is regular e-mails that capture the output, and it is up to you to look at them diligently and notice when something broke. Jenkins keeps those outputs and makes it easy for you to notice when something is wrong.
A Python wheel of the wheel package, for use with virtualenv.
Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python, which allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2.
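For example, the short Python sketch below lists S3 buckets and the IDs of running EC2 instances; it assumes AWS credentials and a default region are already configured, for instance via ~/.aws/credentials or environment variables:

    import boto3

    # List all S3 buckets owned by the configured account.
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])

    # List the IDs of EC2 instances that are currently running.
    ec2 = boto3.client("ec2")
    response = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"])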