
Zenko Documentation

Release 7.0.0

Foo

Jul 12, 2018


Documentation

1 Zenko
    1.1 May I offer you some lovely documentation?
    1.2 Contributing
    1.3 Overview
    1.4 Testing Zenko
    1.5 Zenko in Production

2 Zenko Quick Testing Swarm Stack
    2.1 Preparing
    2.2 Deploying
    2.3 Testing

3 Zenko Swarm Stack
    3.1 Preparing
    3.2 Deployment
    3.3 Using Zenko Orbit
    3.4 Testing
    3.5 Further Improvements

4 Registering a Zenko Instance on Orbit Management UI
    4.1 Minikube and MetalK8s (and other Kubernetes-based deployments)
    4.2 Docker Swarm deployments

CHAPTER 1

Zenko

Zenko is Scality’s open source multi-cloud data controller.


Zenko provides a unified namespace, access API, and search capabilities for data stored locally (using Docker volumes
or Scality RING) or in public cloud storage services like Amazon S3, Microsoft Azure Blob storage, or Google Cloud
Storage.
Learn more at Zenko.io.

1.1 May I offer you some lovely documentation?

1.2 Contributing

If you’d like to contribute, please review the Contributing Guidelines.

1.3 Overview

This repository includes installation resources to deploy the full Zenko stack over different orchestration systems.
Currently we have Kubernetes and Docker Swarm.


1.3.1 Zenko Stack

The stack consists of:


• nginx
• Zenko Cloudserver
• Zenko Backbeat Async Replication Engine
• MongoDB
• Redis

all magically configured to talk to each other.

1.4 Testing Zenko

Simple Zenko setup for quick testing with non-production data:


• Zenko Single-Node Kubernetes
• Zenko Docker Swarm Testing

1.5 Zenko in Production

Production deployments of Zenko:

• Include high availability (HA)
• Ask for pre-existing volumes

• Zenko Kubernetes Helm Chart deployment
• Deploying an HA Kubernetes cluster
• Zenko Docker Swarm HA Deployment

CHAPTER 2

Zenko Quick Testing Swarm Stack

This Docker service stack describes a simple Zenko setup for quick testing with non-production data.

2.1 Preparing

Swarm mode must be enabled on the local Docker daemon. See this tutorial for more on Swarm mode.
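If Swarm mode is not yet active on your machine, a minimal way to enable it for a single-node test (a sketch; the
default network interface is assumed to be fine to advertise) is:

$ docker swarm init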

2.2 Deploying

Deploy the stack:

$ docker stack deploy -c docker-stack.yml zenko-testing



Check that the services are up:

$ docker stack services zenko-testing


ID             NAME               MODE         REPLICAS   IMAGE
5s5ny9y859sj   zenko-testing_lb   replicated   1/1        zenko/loadbalancer:latest
ei95xqodynoc   zenko-testing_s3   replicated   1/1        scality/s3server:latest

2.3 Testing

You can use awscli to perform S3 operations on your Zenko stack:


IMPORTANT: If the endpoint uses the default port (80), never append the port to the endpoint address. If a
custom port is in use, it must be specified.


$ export AWS_ACCESS_KEY_ID=accessKey1
$ export AWS_SECRET_ACCESS_KEY=verySecretKey1
$ aws s3 --endpoint http://localhost mb s3://bucket1 --region=us-east-1
make_bucket: bucket1
$ aws s3 --endpoint http://localhost ls
2017-06-15 16:42:58 bucket1
$ aws s3 --endpoint http://localhost cp README.md s3://bucket1
upload: ./README.md to s3://bucket1/README.md
$ aws s3 --endpoint http://localhost ls s3://bucket1
2017-06-15 17:36:10 1510 README.md
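If you were to publish the load balancer on a custom port instead (8080 here is a hypothetical value), the endpoint
would include it:

$ aws s3 --endpoint http://localhost:8080 ls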



CHAPTER 3

Zenko Swarm Stack

Note: This stack's metadata engine has been switched to MongoDB. Updating from a previous version initializes
a new database instead of reusing your existing data.

This Docker service stack describes a simple Zenko production setup, including:

• An nginx-based load balancer on all nodes of the swarm
• Multi-tiered networks (user-facing, DMZ, and backend services)
• High availability and service resiliency (storage node excluded), thanks to Docker Swarm's overlay network,
  virtual IPs, and scheduler

3.1 Preparing

3.1.1 Swarm

Swarm mode must be enabled on the local Docker daemon. See this tutorial for more on Swarm mode.

3.1.2 Storage Node Selection

Because we are using direct filesystem storage, there is no replication of the actual data. One specific node in the
swarm must be selected for storage.
As this storage node is responsible for the data, it’s best to put the storage directories on fast, reliable disks. A
backup/restore policy is also highly recommended.
From a manager node, locate the node that will host the data and metadata:

$ docker node ls
ID                          HOSTNAME                                 STATUS   AVAILABILITY   MANAGER STATUS
emuws4813jejap6a22n2sk10n   s3-node-zenko-swarm-4.na.scality.cloud   Ready    Active
gz2cs88bmpdi4pe1scbs6iqqz   s3-node-zenko-swarm-3.na.scality.cloud   Ready    Active
n5vcd4tqyo443sh4n82gcewqx   s3-node-zenko-swarm-2.na.scality.cloud   Ready    Active
ng8quztnef0r1x90le4d6lssj   s3-node-zenko-swarm-1.na.scality.cloud   Ready    Active
w43z9jeujmolyoic5ivd5tft4 * s3-node-zenko-swarm-0.na.scality.cloud   Ready    Active         Leader

Here, we choose the host s3-node-zenko-swarm-1.na.scality.cloud with ID ng8quztnef0r1x90le4d6lssj. To
ensure that Docker Swarm only schedules persistent containers to this node, assign the io.zenko.type label with
the value storage to the node:

$ docker node update --label-add io.zenko.type=storage ng8quztnef0r1x90le4d6lssj


ng8quztnef0r1x90le4d6lssj

Check that the label has been applied:

$ docker node inspect ng8quztnef0r1x90le4d6lssj -f '{{ .Spec.Labels }}'


map[io.zenko.type:storage]

Note: If you skip this step, some services in the stack will remain pending and will never be scheduled.
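If you later need to move storage to a different node, you can remove the label from the old node and add it to the
new one (a sketch; the first ID below is the example node above, the second is a placeholder). Remember that this
only changes scheduling: the data itself stays on the old node's disks.

$ docker node update --label-rm io.zenko.type ng8quztnef0r1x90le4d6lssj
$ docker node update --label-add io.zenko.type=storage <new-node-id>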

3.1.3 Storage Volumes

Volumes are automatically created by Docker Swarm as needed.


Note: Deleting the stack from the swarm also deletes the data.
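To see which volumes the stack has created, filter on the stack name you chose at deployment (zenko-prod is used
here as an example):

$ docker volume ls --filter "name=zenko-prod"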

3.1.4 Zenko Orbit

By default, the stack registers itself at the Zenko Orbit portal and uploads anonymous stats. Zenko Orbit allows easy
configuration of users, remote storage locations, replication and more, as well as instance monitoring.
To opt out of remote management and monitoring, export this environment variable before deployment:

$ export REMOTE_MANAGEMENT_DISABLE=1
$ docker stack deploy -c docker-stack.yml zenko-prod
[...]

3.1.5 Access and Secret Keys

SKIP THIS STEP IF YOU ARE USING ZENKO ORBIT.


The default access and secret key pair is deployment-specific-access-key /
deployment-specific-secret-key. You must change them by updating the SCALITY_ACCESS_KEY_ID and
SCALITY_SECRET_ACCESS_KEY environment variables in the secrets.txt file.
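A minimal sketch of what the relevant lines might look like, assuming secrets.txt uses plain KEY=value entries
(the key values below are placeholders):

SCALITY_ACCESS_KEY_ID=my-new-access-key
SCALITY_SECRET_ACCESS_KEY=my-new-secret-key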

3.1.6 Endpoint Name

SKIP THIS STEP IF YOU ARE USING ZENKO ORBIT.


By default, the endpoint name is zenko. You can change this to the host name presented to your clients (for example,
s3.mydomain.com) by exporting the ENDPOINT environment variable before deployment:


$ export ENDPOINT=s3.mydomain.com
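Clients must be able to resolve that name to one (or more) of the swarm nodes. For a quick test without DNS, a hosts
entry is enough (203.0.113.10 is a placeholder address):

$ echo "203.0.113.10 s3.mydomain.com" | sudo tee -a /etc/hosts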

3.2 Deployment

Deploy the stack:

$ docker stack deploy -c docker-stack.yml zenko-prod


Creating network zenko-prod_frontend
Creating network zenko-prod_backend
Creating network zenko-prod_frontend-dmz
Creating secret zenko-prod_s3-credentials
Creating service zenko-prod_quorum
Creating service zenko-prod_mongodb
Creating service zenko-prod_queue
Creating service zenko-prod_s3-front
Creating service zenko-prod_lb
Creating service zenko-prod_backbeat-consumer
Creating service zenko-prod_backbeat-api
Creating service zenko-prod_s3-data
Creating service zenko-prod_backbeat-producer
Creating service zenko-prod_cache
Creating service zenko-prod_mongodb-init

Check that the services are up:

$ docker stack services zenko-prod


ID             NAME                           MODE         REPLICAS   IMAGE                          PORTS
1j8jb41llhtm   zenko-prod_s3-data             replicated   1/1        zenko/cloudserver:pensieve-3   *:30010->9991/tcp
3y7vayna97bt   zenko-prod_s3-front            replicated   1/1        zenko/cloudserver:pensieve-3   *:30009->8000/tcp
957xksl0cbge   zenko-prod_mongodb-init        replicated   0/1        mongo:3.6.3-jessie
cn0v7cf2jxkb   zenko-prod_queue               replicated   1/1        wurstmeister/kafka:1.0.0       *:30008->9092/tcp
jjx9oabeugx1   zenko-prod_mongodb             replicated   1/1        mongo:3.6.3-jessie             *:30007->27017/tcp
o530bkuognu5   zenko-prod_lb                  global       1/1        zenko/loadbalancer:latest      *:80->80/tcp
r69lgbue0o3o   zenko-prod_backbeat-api        replicated   1/1        zenko/backbeat:pensieve-4
ut0ssvmi10tx   zenko-prod_backbeat-consumer   replicated   1/1        zenko/backbeat:pensieve-4
vj2fr90qviho   zenko-prod_cache               replicated   1/1        redis:alpine                   *:30011->6379/tcp
vqmkxu7yo859   zenko-prod_quorum              replicated   1/1        zookeeper:3.4.11               *:30006->2181/tcp
y7tt98x7jdl9   zenko-prod_backbeat-producer   replicated   1/1        zenko/backbeat:pensieve-4
[...]

Note: Having 0 replicas of the mongodb-init service is fine, because it is expected to execute successfully only once
to initialize the mongodb replica set.
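To confirm that the init task completed rather than failed, you can check its task history (a hedged check; a
successful run should show a Complete state):

$ docker service ps zenko-prod_mongodb-init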


3.3 Using Zenko Orbit

To get your instance’s Zenko Orbit identifier and claim it in the portal, issue this command:

$ docker service logs zenko-prod_s3-front | grep -i instance

zenko-prod_s3-front.1.khz73ag06k2k@moby | {"name":"S3","time":1512424260154,
"req_id":"115779d9564e960048a5","level":"info","message":"this deployment's
Instance ID is ce1bcdb7-8e30-4e3f-b7a2-9424078c9159","hostname":"843d31bf15f0","pid":28}

Go to Zenko Orbit to manage your deployment through a nifty UI.

3.4 Testing

To use the tests folder, update the credentials in Zenko/tests/utils/s3SDK.js with credentials generated
in Zenko Orbit. Install node modules with npm install. Then, run npm test.
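In other words (a sketch, assuming the repository is checked out locally and the commands are run from the
Zenko/tests folder referenced above):

$ cd Zenko/tests
$ npm install
$ npm test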
You can use awscli to perform S3 operations on the Zenko stack. Because the load balancer container is deployed in
global mode, we can use any of the swarm nodes as the endpoint.
For the default zenko host name, substitute either the ENDPOINT value configured above (if applicable) or whatever
the hostname -f command returns.

IMPORTANT: If the endpoint uses the default port (80), never append the port to the endpoint address. If a
custom port is in use, it must be specified.

$ export AWS_ACCESS_KEY_ID=deployment-specific-access-key
$ export AWS_SECRET_ACCESS_KEY=deployment-specific-secret-key
$ aws s3 --endpoint http://zenko mb s3://bucket1 --region=us-east-1
make_bucket: bucket1
$ aws s3 --endpoint http://zenko ls
2017-06-20 00:12:14 bucket1
$ aws s3 --endpoint http://zenko cp README.md s3://bucket1
upload: ./README.md to s3://bucket1/README.md
$ aws s3 --endpoint http://zenko ls s3://bucket1
2017-06-20 00:12:53 5052 README.md

3.4.1 Metadata Search

Metadata search can be tested from within the S3-frontend container.


First, from your machine (not within the S3 Docker), create some objects:

$ aws s3api put-object --bucket bucket1 --key findme1 --endpoint-url http://127.0.0.1 --metadata "color=blue"
$ aws s3api put-object --bucket bucket1 --key leaveMeAlone2 --endpoint-url http://127.0.0.1 --metadata "color=red"
$ aws s3api put-object --bucket bucket1 --key findme2 --endpoint-url http://127.0.0.1 --metadata "color=blue"

From within the S3-frontend container:

$ bin/search_bucket.js -a accessKey1 -k verySecretKey1 -b bucket1 \
    -q "userMd.\`x-amz-meta-color\`=\"blue\"" -h 127.0.0.1 -p 8000
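You can also confirm from your machine that the metadata was attached (a hedged check; the Metadata field in the
response should contain the color entry):

$ aws s3api head-object --bucket bucket1 --key findme1 --endpoint-url http://127.0.0.1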


3.5 Further Improvements

• Allow use of an external environment variables file.
• Include a log collection and visualization component.
• Include health checks in the zenko/cloudserver image.
• Explain how to scale/troubleshoot services and replace the storage node.



CHAPTER 4

Registering a Zenko Instance on Orbit Management UI

This section helps you register your Zenko instance on Orbit, for Minikube-based, MetalK8s-based, or Docker
Swarm-based deployments.

4.1 Minikube and MetalK8s (and other Kubernetes-based deployments)

4.1.1 Step 1: go to the Kubernetes dashboard

For minikube:

$ minikube dashboard

A new tab should open in your browser, taking you to the Kubernetes dashboard.

For MetalK8s:

$ kubectl proxy

While the tunnel is up and running, access the dashboard at
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.

4.1.2 Step 2: finding the Zenko instance ID

Option A: from the CloudServer logs

From the dashboard, click the cloudserver-front service, then one of the pods running for that service. In the top
right menu, click “Logs”, then search for “instance ID”. You should find a log line containing a long string of random
characters: this is your instance ID; copy it. If you cannot find such a line in the logs, try option B.
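If you prefer the command line to the dashboard, a hedged equivalent is to grep the pod logs directly (the label
selector below is an assumption — adjust it to match how your deployment labels the cloudserver-front pods, and add
-n <namespace> if Zenko is not in the default namespace):

$ kubectl logs -l app=cloudserver-front | grep -i "instance id"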

Option B: from the Metadata MongoDB cluster

From the dashboard, click the mongodb-replicaset service, then the mongodb-replicaset-0 pod. For fresh deployments,
pod 0 should be the MongoDB master; note that MongoDB can only be queried from the master pod.

TIP: If the following procedure doesn't work for you and you get an error like "not master and
slaveOk=false", the pod in which you are executing these commands is not the master. In that case, use
db.isMaster().primary to find the master node, and execute the procedure inside that node.

Then hit “Exec” in the top right menu of the dashboard, and use the CLI to query MongoDB as follows:

$ mongo
MongoDB shell version v3.4.15
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.15
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-05-25T15:44:08.084+0000 I STORAGE  [initandlisten]
2018-05-25T15:44:08.084+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-05-25T15:44:08.084+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2018-05-25T15:44:08.503+0000 I CONTROL  [initandlisten]
2018-05-25T15:44:08.503+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-05-25T15:44:08.504+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-05-25T15:44:08.504+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-05-25T15:44:08.504+0000 I CONTROL  [initandlisten]

rs0:PRIMARY> use metadata
switched to db metadata

rs0:PRIMARY> db.PENSIEVE.findOne({_id: "auth/zenko/remote-management-token"})
{
        "_id" : "auth/zenko/remote-management-token",
        "value" : {
                "instanceId" : "abc12345-5689-4f15-b0e6-5a9c17e7ae01",
                "issueDate" : "2018-05-25T15:45:56Z",
                (...)
        }
}

In this JSON object, your instance ID is the instanceId field inside the value object.
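For scripting, the same lookup can be done non-interactively from inside the same pod (a hedged one-liner):

$ mongo --quiet --eval 'db.getSiblingDB("metadata").PENSIEVE.findOne({_id: "auth/zenko/remote-management-token"}).value.instanceId'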


4.1.3 Step 3: Registering with Orbit

Go to Orbit's homepage, hit “Get Started”, and log in with your Google Account (we comply with GDPR regulations).
You will be welcomed by a charming astronaut fox offering you a tour or the option to install now. Feel free to take
the tour to discover all of Orbit's capabilities, or hit “Install Now” if you can't wait! You will land on the
installation page.

Hit “Register My Instance”.

On the next page, enter your instance ID in the left box, choose a friendly name for your instance in the right box,
then hit “Submit Now”.

You’re all set! Enjoy Orbit and Zenko, and reach out on the forum if you need anything!


4.2 Docker Swarm deployments

4.2.1 Step 1: retrieve your instance ID from CloudServer logs

Find your CloudServer frontend service in your zenko stack, and search its logs:

$ docker stack services {{STACK_NAME}} | grep front

3y7vayna97bt   {{STACK_NAME}}_s3-front   replicated   1/1   zenko/cloudserver:pensieve-3   *:30009->8000/tcp

$ docker service logs {{STACK_NAME}}_s3-front | grep -i instance

{{STACK_NAME}}_s3-front.1.khz73ag06k2k@moby | {"name":"S3","time":1512424260154,
"req_id":"115779d9564e960048a5","level":"info","message":"this deployment's
Instance ID is abc12345-5689-4f15-b0e6-5a9c17e7ae01","hostname":"843d31bf15f0","pid":28}

4.2.2 Step 2: Registering with Orbit

Go to Orbit's homepage, hit “Get Started”, and log in with your Google Account (we comply with GDPR regulations).
You will be welcomed by a charming astronaut fox offering you a tour or the option to install now. Feel free to take
the tour to discover all of Orbit's capabilities, or hit “Install Now” if you can't wait! You will land on the
installation page.

Hit “Register My Instance”.

On the next page, enter your instance ID in the left box, choose a friendly name for your instance in the right box,
then hit “Submit Now”.


You’re all set! Enjoy Orbit and Zenko, and reach out on the forum if you need anything!

