A Chef server written in Go, able to run entirely in memory, with optional persistence by saving the in-memory data to disk or by using MySQL or Postgres as the data storage backend. Docs: http://goiardi.readthedocs.org/en/latest/index.html
Guava is a suite of core and expanded libraries that include utility classes, Google's collections, I/O classes, and much more. This project is a complete packaging of all the Guava libraries into a single jar. Individual portions of Guava can be used by downloading the appropriate module and its dependencies.
The core API of the Hamcrest matcher framework, to be used by third-party framework providers. This includes a foundation set of matcher implementations for common operations.
Authenticating with username and password: Atlas Go can automatically generate an API authentication token given a username and password. For example:

    client := atlas.DefaultClient()
    token, err := client.Login("username", "password")
    if err != nil {
        panic(err)
    }

The Login function returns an API token that can be used to sign requests. This function also sets the Token parameter on the Atlas Client, so future requests are signed with this access token. If you have two-factor authentication enabled, you must manually generate an access token on the Atlas website.
Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and extremely scalable. Consul provides several key features:

Service Discovery - Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. External services such as SaaS providers can be registered as well.

Health Checking - Health checking enables Consul to quickly alert operators about any issues in a cluster. The integration with service discovery prevents routing traffic to unhealthy hosts and enables service-level circuit breakers.

Key/Value Storage - A flexible key/value store enables storing dynamic configuration, feature flagging, coordination, leader election, and more. The simple HTTP API makes it easy to use anywhere.

Multi-Datacenter - Consul is built to be datacenter aware, and can support any number of regions without complex configuration.
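To make the HTTP interface concrete, here is a minimal sketch in Go using the official github.com/hashicorp/consul/api client against a local agent; the service name, port, health-check URL, and key below are illustrative values, not anything Consul itself requires:

    package main

    import (
        "fmt"
        "log"

        consulapi "github.com/hashicorp/consul/api"
    )

    func main() {
        // Connect to the local agent (default address 127.0.0.1:8500).
        client, err := consulapi.NewClient(consulapi.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // Service discovery: register a service with an HTTP health check.
        err = client.Agent().ServiceRegister(&consulapi.AgentServiceRegistration{
            Name: "web",
            Port: 8080,
            Check: &consulapi.AgentServiceCheck{
                HTTP:     "http://127.0.0.1:8080/health",
                Interval: "10s",
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // Key/value storage: write a key, then read it back.
        kv := client.KV()
        if _, err := kv.Put(&consulapi.KVPair{Key: "app/config", Value: []byte("enabled")}, nil); err != nil {
            log.Fatal(err)
        }
        pair, _, err := kv.Get("app/config", nil)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s = %s\n", pair.Key, pair.Value)
    }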
Consul comes with support for a beautiful, functional web UI out of the box. This UI can be used for viewing all services and nodes, viewing all health checks and their current status, and for reading and setting key/value data. The UI automatically supports multi-datacenter. For ease of deployment, the UI is distributed as static HTML and JavaScript. You do not need a separate web server to run the web UI. The Consul agent itself can be configured to serve the UI. The UI is available at the /ui path on the same port as the HTTP API. By default this is http://localhost:8500/ui.
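For example, a minimal sketch of serving the UI from a local agent; the data directory and UI path below are illustrative, and the exact flags depend on the Consul version (early releases take the downloaded assets via -ui-dir, while later releases bundle the UI and enable it with -ui):

    # point the agent at the unpacked static UI assets
    $ consul agent -data-dir /tmp/consul -ui-dir /path/to/consul-ui
    # the UI is then served at http://localhost:8500/ui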
Nomad is a cluster manager, designed for both long-lived services and short-lived batch processing workloads. Developers use a declarative job specification to submit work, and Nomad ensures constraints are satisfied and resource utilization is optimized by efficient task packing. Nomad supports all major operating systems and virtualized, containerized, or standalone applications. The key features of Nomad are:

Docker Support: Jobs can specify tasks which are Docker containers. Nomad will automatically run the containers on clients which have Docker installed, scale up and down based on the number of instances requested, and automatically recover from failures.

Multi-Datacenter and Multi-Region Aware: Nomad is designed to be a global-scale scheduler. Multiple datacenters can be managed as part of a larger region, and jobs can be scheduled across datacenters if requested. Multiple regions join together and federate jobs, making it easy to run jobs anywhere.

Operationally Simple: Nomad runs as a single binary that can be either a client or server, and is completely self-contained. Nomad does not require any external services for storage or coordination, and combines the features of a resource manager and scheduler in a single system.

Distributed and Highly Available: Nomad servers cluster together and perform leader election and state replication to provide high availability in the face of failure. The Nomad scheduling engine uses optimistic concurrency, allowing all servers to make scheduling decisions to maximize throughput.

HashiCorp Ecosystem: Nomad integrates with the entire HashiCorp ecosystem of tools. Like all HashiCorp tools, Nomad follows the Unix philosophy of doing something specific and doing it well. Nomad integrates with tools like Packer, Consul, and Terraform to support building artifacts, service discovery, monitoring, and capacity management.
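As a concrete illustration of the declarative job specification, here is a minimal sketch of a job file in Nomad's HCL syntax; the job name, Docker image, and resource figures are illustrative, and the available stanzas vary somewhat between Nomad versions:

    # example.nomad - run a Redis container as a long-lived service
    job "cache" {
      datacenters = ["dc1"]
      type        = "service"

      group "redis" {
        count = 1

        task "redis" {
          driver = "docker"

          config {
            image = "redis:3.2"
          }

          resources {
            cpu    = 500 # MHz
            memory = 256 # MB
          }
        }
      }
    }

Such a job would be submitted with nomad run example.nomad, and Nomad places the task on a client that satisfies its constraints.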
Client agent for Nomad
Server agent for Nomad
Packer is a tool for building identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer comes out of the box with support for the following platforms:

Amazon EC2 (AMI) - both EBS-backed and instance-store AMIs
DigitalOcean
Docker
Google Compute Engine
OpenStack
Parallels
QEMU - both KVM and Xen images
VirtualBox
VMware

Support for other platforms can be added via plugins.

After Packer is installed, create your first template, which tells Packer what platforms to build images for and how you want to build them. In our case, we'll create a simple AMI that has Redis pre-installed. Save this file as quick-start.json, and be sure to replace any credentials with your own:

    {
      "builders": [{
        "type": "amazon-ebs",
        "access_key": "YOUR KEY HERE",
        "secret_key": "YOUR SECRET KEY HERE",
        "region": "us-east-1",
        "source_ami": "ami-de0d9eb7",
        "instance_type": "t1.micro",
        "ssh_username": "ubuntu",
        "ami_name": "packer-example {{timestamp}}"
      }]
    }

Next, tell Packer to build the image:

    $ packer build quick-start.json
    ...

Packer will build an AMI according to the "quick-start" template, and the AMI will be available in your AWS account. To delete the AMI, you must manually delete it using the AWS console. Packer builds your images; it does not manage their lifecycle. Where they go, how they're run, and so on is up to you.
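Before building, the template can also be checked for errors with Packer's standard validate subcommand:

    $ packer validate quick-start.json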