Update packages on Ubuntu and FreeBSD
Fixes #1
The Vagrantfile in use is a copy of
`https://github.com/stationgroup/vagrant-labs/tree/master/imperialspeculate`.
Two roles are used: debian-upgrade (an upstream Galaxy role) and
freebsd-upgrade (a small role based on what was proposed in the comments of #1,
extended with proper support for check mode).
The upgrade process is contained in the playbook `os_upgrade.yml`, which
automatically creates the proper groups for Ubuntu and FreeBSD hosts. If that is
not needed, the first play can be left out and the target `hosts:` of the second
play replaced by the relevant groups from your inventory (e.g. EC2 tags).
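For reference, here is a minimal sketch of that two-play structure, assuming the
grouping is done with `group_by` on the gathered distribution fact; the group
names and conditions below are illustrative, not the actual contents of
`os_upgrade.yml`:

```yaml
---
# Illustrative sketch only; os_upgrade.yml may group hosts differently.
- name: create per-OS groups
  hosts: all
  tasks:
    - name: group hosts by distribution
      group_by:
        key: 'os_{{ ansible_distribution | lower }}'

- name: upgrade packages
  hosts: os_ubuntu:os_freebsd
  roles:
    - role: debian-upgrade
      when: ansible_distribution == 'Ubuntu'
    - role: freebsd-upgrade
      when: ansible_distribution == 'FreeBSD'
```

The `hosts:` line of the second play is the one you would point at your own
inventory groups instead.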
A local `ansible.cfg` is defined and is needed for these scripts to run out of
the box. This implies that all ansible commands must be run from the
`ansible-experiments/package_updates` folder.
A small script, `setup-requirements`, is provided to initialize everything; it
should be executed after the vagrant boxes have come online. It generates an
ssh-config for said vagrant boxes, downloads the roles from Galaxy, and performs
a base install on the hosts: installing Python dependencies, installing Ansible
itself on ubuntu1, and deploying an SSH key to all nodes so they can be managed
from the vagrant box `ubuntu1`, which acts as the Ansible controller machine.
When deploying and setting up from the machine where vagrant runs, you need to
add some extra arguments to the ansible invocation:
`--ssh-extra-args "-F ./vagrant-ssh-config" --inventory hosts-vagrant`
These are no longer necessary once ansible is run from `ubuntu1`.
The vagrant setup seems to have a provisioning bug that kicks in with the latest
Ubuntu 18.04 boxes. The FreeBSD boxes also experience a provisioning problem
with the same result: the second, private network interface does not get
configured. As these interfaces are used to run ansible from `ubuntu1`, I could
not fully test the scripts from there.
---
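# Base-install playbook: put a usable Python on every node (via raw, since the
# Ansible modules need Python on the managed host), set up ubuntu1 as the
# Ansible controller, and distribute ubuntu1's SSH key to all nodes.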
- name: install ansible requirements - ubuntu xenial
  hosts: ubuntu[1..2]
  gather_facts: false
  tasks:
    - name: install python2 (vagrant images seem to come with python3 only)
      raw: apt update && apt install -y python python-apt

- name: install ansible requirements - ubuntu bionic
  hosts: ubuntu[3..4]
  gather_facts: false
  tasks:
    - name: install python3-apt
      raw: apt install -y python3-apt

- name: install ansible requirements - freebsd
  hosts: freebsd*
  gather_facts: false
  tasks:
    - raw: pkg update && pkg install --yes python
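
# The next play runs on ubuntu1 only and turns it into the Ansible controller:
# pip, Ansible via pip, a checkout of this repo, and an SSH key for the vagrant
# user (registered for distribution in the final play).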
- name: install ubuntu1 node as ansible control machine
  hosts: ubuntu1
  tasks:
    - name: install pip
      apt:
        name:
          - python3-pip

    - name: install/upgrade Python tools
      pip:
        name:
          - pip
          - setuptools
        extra_args: --upgrade

    - name: install ansible
      pip:
        name: ansible
        version: 2.5.2

    - become_user: vagrant
      name: checkout experiments repo on controller node
      git:
        dest: ./ansible-experiments
        repo: https://github.com/stationgroup/ansible-experiments

    - become_user: vagrant
      name: create ssh key for vagrant user
      user:
        name: vagrant
        generate_ssh_key: true
        ssh_key_bits: 4096
        ssh_key_file: .ssh/id_rsa
      register: vagrant_control_user
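
# The public key generated above was registered on ubuntu1; the play below reads
# it back through hostvars and authorizes it on every node, so ubuntu1 can reach
# all machines over SSH as the vagrant user.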
- name: distribute vagrant@ubuntu1 ssh key
  hosts: all
  tasks:
    - authorized_key:
        key: '{{ hostvars.ubuntu1.vagrant_control_user.ssh_public_key }}'
        user: vagrant