# DAC - MICKAEL GOMEZ
## Instances
I worked with 4 OpenStack instances:
* gomez-1-1 ➜ 172.28.100.186 (LoadBalancer)
* gomez-1-2 ➜ 172.28.100.64
* gomez-1-3 ➜ 172.28.100.77
* gomez-1-4 ➜ 172.28.100.22
## Web server
A simple Python server:
```bash
python -m http.server <PORT>
```
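For a quick local check (the port here is only an example; the real one is generated at deploy time):
```bash
# Serve the current directory on an example port, then fetch the page
python -m http.server 8001 &
curl http://localhost:8001/
```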
Docker image based on Alpine:
```Dockerfile
FROM python:3.7-alpine
COPY index.html /
# These 2 lines are generated at deploy time
EXPOSE <PORT>
CMD python -m http.server <PORT>
```
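For illustration, this is what the generated Dockerfile would look like on the first target instance (padding 1, hence port 8001, following the `8000 + padding` rule used by `setup.sh`):
```Dockerfile
FROM python:3.7-alpine
COPY index.html /
# Appended by setup.sh at deploy time (padding 1 ➜ port 8001)
EXPOSE 8001
CMD python -m http.server 8001
```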
## Deployment
### Two prerequisites
The `instances` file contains my target instances:
```
172.28.100.64
172.28.100.77
172.28.100.22
```
My SSH key linked to the instances is located at `~/.ssh/id_rsa`.
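Both prerequisites can be checked quickly by making sure every target instance answers over SSH (a simple sanity check, not part of the deployment itself):
```bash
# Should print the hostname of each target instance without prompting for a password
while read -r ip; do
    ssh -i ~/.ssh/id_rsa "ubuntu@$ip" hostname
done < ./instances
```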
### Deployment script
All I have to do is run `./deploy.sh`, and that's it: it configures my instances on its own, sends the contents of the `setup/` folder to each of them, creates the images and containers (or replaces them if they already exist), and takes the port padding into account for the port to expose on the host machine.
*NB: The port padding happens at the interface between the container and the host, but each service is always mapped to port 80 no matter what.*
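Concretely, given the order of the `instances` file and the `8000 + padding` rule from `setup.sh`, the resulting mappings are:
```bash
# gomez-1-2 (padding 1)
sudo docker run --rm -d --name webserver -p 80:8001 webserver
# gomez-1-3 (padding 2)
sudo docker run --rm -d --name webserver -p 80:8002 webserver
# gomez-1-4 (padding 3)
sudo docker run --rm -d --name webserver -p 80:8003 webserver
```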
## Loadbalancer
A basic load balancer managed by Nginx:
```nginx
upstream balancers {
    server 172.28.100.64;
    server 172.28.100.77;
    server 172.28.100.22;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_pass http://balancers;
    }
}
```
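Since each instance serves an `index.html` containing its own hostname, the round-robin balancing can be verified with a few requests to the load balancer:
```bash
# Each request should be answered by a different backend in turn
for i in 1 2 3; do
    curl -s http://172.28.100.186/
done
```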
## Monitoring
`node_exporter` is installed manually, directly on the load balancer (to easily access all of the host's metrics).
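A minimal sketch of that manual install (the version number is only illustrative; any recent release from the official node_exporter releases page works the same way):
```bash
# Download, unpack and start node_exporter on its default port (9100)
wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
tar xzf node_exporter-1.3.1.linux-amd64.tar.gz
./node_exporter-1.3.1.linux-amd64/node_exporter &
```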
Prometheus and Grafana, on the other hand, are each started with a `run.sh` script, in `/home/ubuntu/dac/prometheus` and `/home/ubuntu/dac/grafana` respectively, which simply creates a container from the official images.
Once started, all services are available on their default ports:
* 3000 ➜ Grafana
* 9100 ➜ node_exporter
* 9090 ➜ Prometheus
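A quick way to confirm that everything responds on the load balancer (172.28.100.186):
```bash
curl -s -o /dev/null -w "Grafana: %{http_code}\n" http://172.28.100.186:3000
curl -s http://172.28.100.186:9100/metrics | head -n 3   # node_exporter metrics
curl -s -o /dev/null -w "Prometheus: %{http_code}\n" http://172.28.100.186:9090
```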
#!/bin/bash
# deploy.sh: configures every instance listed in ./instances

# Globals
HEAD="\033[0;36m[DAC - DEPLOY]\033[0m"
CONTEXT="./setup"

# Instances to deploy & port padding
mapfile -t instances < ./instances
padding=1

for i in "${instances[@]}"
do
    printf "$HEAD Deploying instance on $i\n"
    HOST="ubuntu@$i"

    # Create the dac directory if necessary
    ssh $HOST mkdir dac > /dev/null 2>&1

    # Copy files to the remote server
    printf "$HEAD Copying files to remote server...\n"
    scp -i ~/.ssh/id_rsa "$CONTEXT/setup.sh" "$CONTEXT/Dockerfile" "$HOST:/home/ubuntu/dac"

    # Run the setup remotely
    printf "$HEAD Deploy completed, now starting setup...\n"
    ssh -t $HOST "cd dac/ && sudo ./setup.sh $padding"

    # Increment port padding
    ((padding++))
done
#!/bin/bash
# run.sh (grafana): start Grafana from the official image
sudo docker run --rm -d --name grafana -p 3000:3000 grafana/grafana
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['172.28.100.186:9100']

  - job_name: 'gomez-1-2'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['172.28.100.64']

  - job_name: 'gomez-1-3'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['172.28.100.77']

  - job_name: 'gomez-1-4'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['172.28.100.22']
#!/bin/bash
# run.sh (prometheus): start Prometheus from the official image, with prometheus.yml mounted into the container
sudo docker run --rm -d --name prometheus -p 9090:9090 \
-v /home/ubuntu/dac/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
# Dockerfile (setup/): EXPOSE and CMD are appended by setup.sh at deploy time
FROM python:3.7-alpine
COPY index.html /
#!/bin/bash
# setup.sh: run on each instance by deploy.sh; (re)builds and starts the web server container

# Globals
HEAD="\033[1;32m[DAC - SETUP]\033[0m"
NAME=webserver
PORT_PADDING=$((8000 + $1))

# Generate html file containing the hostname
echo "<h1>$(hostname)</h1>" > ./index.html

# Stop the running container, if any
existingContainer=$(sudo docker ps -aqf "name=$NAME")
if [ -n "${existingContainer}" ]
then
    printf "$HEAD Webserver's container ($existingContainer) already running, stopping it...\n"
    sudo docker stop "$existingContainer"
fi

# Remove the existing image, if any
existingImage=$(sudo docker images -q $NAME)
if [ -n "${existingImage}" ]
then
    printf "$HEAD Webserver's image ($existingImage) already exists, removing it...\n"
    sudo docker rmi "$existingImage"
fi

# Set up port padding
echo >> ./Dockerfile && echo "EXPOSE $PORT_PADDING" >> ./Dockerfile
echo "CMD python -m http.server $PORT_PADDING" >> ./Dockerfile

# Build image
printf "$HEAD Building image...\n"
sudo docker build -t $NAME .

# Run container
printf "$HEAD Starting container...\n"
sudo docker run --rm -d --name $NAME -p "80:$PORT_PADDING" $NAME

# All done
printf "$HEAD All done, closing connection, bye !\n"