The core API of the Hamcrest matcher framework, to be used by third-party framework providers. This includes a foundation set of matcher implementations for common operations.
Authenticating with username and password

Atlas Go can automatically generate an API authentication token given a username and password. For example:

    client := atlas.DefaultClient()
    token, err := client.Login("username", "password")
    if err != nil {
        panic(err)
    }

The Login function returns an API token that can be used to sign requests. This function also sets the Token parameter on the Atlas client, so future requests are signed with this access token. If you have two-factor authentication enabled, you must manually generate an access token on the Atlas website.
Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and extremely scalable. Consul provides several key features:

Service Discovery - Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. External services such as SaaS providers can be registered as well.

Health Checking - Health checking enables Consul to quickly alert operators about any issues in a cluster. The integration with service discovery prevents routing traffic to unhealthy hosts and enables service-level circuit breakers.

Key/Value Storage - A flexible key/value store enables storing dynamic configuration, feature flagging, coordination, leader election, and more. The simple HTTP API makes it easy to use anywhere (a short example follows this list).

Multi-Datacenter - Consul is built to be datacenter aware and can support any number of regions without complex configuration.
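As a minimal sketch of the key/value HTTP API, the Go program below writes a value and reads it back from a local agent on the default port 8500. The key name "config/db-host" and its value are illustrative, not taken from the text above.

    package main

    import (
        "bytes"
        "fmt"
        "io/ioutil"
        "net/http"
    )

    func main() {
        // Store a value under the key "config/db-host" via the KV HTTP API
        // of a local agent listening on the default port 8500.
        req, err := http.NewRequest("PUT",
            "http://localhost:8500/v1/kv/config/db-host",
            bytes.NewBufferString("db1.example.com"))
        if err != nil {
            panic(err)
        }
        putResp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        putResp.Body.Close()

        // Read the raw value back.
        getResp, err := http.Get("http://localhost:8500/v1/kv/config/db-host?raw")
        if err != nil {
            panic(err)
        }
        defer getResp.Body.Close()
        value, err := ioutil.ReadAll(getResp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(value)) // prints "db1.example.com"
    }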
Consul comes with support for a beautiful, functional web UI out of the box. This UI can be used for viewing all services and nodes, viewing all health checks and their current status, and for reading and setting key/value data. The UI automatically supports multi-datacenter. For ease of deployment, the UI is distributed as static HTML and JavaScript. You do not need a separate web server to run the web UI. The Consul agent itself can be configured to serve the UI. The UI is available at the /ui path on the same port as the HTTP API. By default this is http://localhost:8500/ui.
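As an illustration of serving the UI from the agent itself (flag availability varies by Consul version, so treat the exact flags as an assumption):

    # -data-dir is required for a persistent agent; -ui serves the bundled
    # web assets (older releases use -ui-dir with a separately downloaded UI).
    $ consul agent -data-dir=/tmp/consul -ui
    # Then browse to http://localhost:8500/ui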
Nomad is a cluster manager, designed for both long-lived services and short-lived batch processing workloads. Developers use a declarative job specification to submit work (a minimal example follows the feature list below), and Nomad ensures constraints are satisfied and resource utilization is optimized by efficient task packing. Nomad supports all major operating systems and virtualized, containerized, or standalone applications. The key features of Nomad are:

Docker Support: Jobs can specify tasks which are Docker containers. Nomad will automatically run the containers on clients which have Docker installed, scale up and down based on the number of instances requested, and automatically recover from failures.

Multi-Datacenter and Multi-Region Aware: Nomad is designed to be a global-scale scheduler. Multiple datacenters can be managed as part of a larger region, and jobs can be scheduled across datacenters if requested. Multiple regions join together and federate jobs, making it easy to run jobs anywhere.

Operationally Simple: Nomad runs as a single binary that can be either a client or server, and is completely self-contained. Nomad does not require any external services for storage or coordination. This means Nomad combines the features of a resource manager and scheduler in a single system.

Distributed and Highly Available: Nomad servers cluster together and perform leader election and state replication to provide high availability in the face of failure. The Nomad scheduling engine is optimized for optimistic concurrency, allowing all servers to make scheduling decisions to maximize throughput.

HashiCorp Ecosystem: Nomad integrates with the entire HashiCorp ecosystem of tools. Like all HashiCorp tools, Nomad is designed in the Unix philosophy of doing something specific and doing it well. Nomad integrates with tools like Packer, Consul, and Terraform to support building artifacts, service discovery, monitoring, and capacity management.
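To make the declarative job specification concrete, here is a minimal sketch of a job file; the job, group, and task names and the resource figures are illustrative placeholders.

    # example.nomad - a hypothetical job running one Redis container
    job "example" {
        # Restrict the job to a single datacenter.
        datacenters = ["dc1"]

        group "cache" {
            # Number of instances of this task group to run.
            count = 1

            task "redis" {
                # Use the Docker driver on clients that have Docker installed.
                driver = "docker"

                config {
                    image = "redis:latest"
                }

                resources {
                    cpu    = 500 # MHz
                    memory = 256 # MB
                }
            }
        }
    }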
Client agent for Nomad
Server agent for Nomad
Packer is a tool for building identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer comes out of the box with support for the following platforms:

Amazon EC2 (AMI), both EBS-backed and instance-store AMIs
DigitalOcean
Docker
Google Compute Engine
OpenStack
Parallels
QEMU, both KVM and Xen images
VirtualBox
VMware

Support for other platforms can be added via plugins.

After Packer is installed, create your first template, which tells Packer what platforms to build images for and how you want to build them. In our case, we'll create a simple AMI that has Redis pre-installed. Save this file as quick-start.json. Be sure to replace any credentials with your own.

    {
        "builders": [{
            "type": "amazon-ebs",
            "access_key": "YOUR KEY HERE",
            "secret_key": "YOUR SECRET KEY HERE",
            "region": "us-east-1",
            "source_ami": "ami-de0d9eb7",
            "instance_type": "t1.micro",
            "ssh_username": "ubuntu",
            "ami_name": "packer-example {{timestamp}}"
        }]
    }

Next, tell Packer to build the image:

    $ packer build quick-start.json
    ...

Packer will build an AMI according to the "quick-start" template. The AMI will be available in your AWS account. To delete the AMI, you must manually delete it using the AWS console. Packer builds your images; it does not manage their lifecycle. Where they go, how they're run, and so on is up to you.
Serf is a decentralized solution for service discovery and orchestration that is lightweight, highly available, and fault tolerant. Serf runs on Linux, Mac OS X, and Windows. An efficient and lightweight gossip protocol is used to communicate with other nodes. Serf can detect node failures and notify the rest of the cluster. An event system is built on top of Serf, letting you use Serf's gossip protocol to propagate events such as deploys, configuration changes, etc. (a brief example follows the list below). Serf is completely masterless with no single point of failure. Here are some example use cases of Serf, though there are many others:

Discovering web servers and automatically adding them to a load balancer
Organizing many memcached or redis nodes into a cluster, perhaps with something like twemproxy, or just configuring an application with the addresses of all the nodes
Triggering web deploys using the event system built on top of Serf
Propagating configuration changes to relevant nodes
Updating DNS records to reflect cluster changes as they occur
Much, much more
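As a brief sketch of agent usage (node names and addresses here are placeholders), two agents can be joined into a cluster and a custom event fired through the gossip layer:

    # Start an agent on each machine (one per host/terminal).
    $ serf agent -node=agent-one -bind=10.0.0.1
    $ serf agent -node=agent-two -bind=10.0.0.2

    # From the second machine, join the first to form a cluster.
    $ serf join 10.0.0.1

    # Propagate a custom "deploy" event to every member via gossip.
    $ serf event deploy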
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. The key features of Terraform are:

Infrastructure as Code: Infrastructure is described using a high-level configuration syntax (a minimal configuration is sketched below). This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

Resource Graph: Terraform builds a graph of all your resources and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

For more information, see the introduction section of the Terraform website.
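As a minimal sketch of the configuration syntax and workflow, the file below describes a single AWS instance. The credentials, region, AMI ID (echoing the Packer example above), and resource names are placeholders, not a vetted configuration.

    # example.tf - a hypothetical single-instance configuration
    provider "aws" {
        access_key = "YOUR KEY HERE"
        secret_key = "YOUR SECRET KEY HERE"
        region     = "us-east-1"
    }

    resource "aws_instance" "web" {
        ami           = "ami-de0d9eb7"
        instance_type = "t1.micro"
    }

With the file saved, preview the execution plan and then apply it:

    $ terraform plan
    $ terraform apply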