Technical Track Record
Case 03: EVE-NG Azure Terraform - Network Lab Provisioning
Briefing
Provisioning a functional EVE-NG lab on Azure manually means repeating the same network, VM, access, and bootstrap steps every time. I structured that flow as infrastructure automation so the environment can be brought up with a minimal topology, static public IP, and initial EVE-NG installation already coupled to provisioning, reducing operational friction for technical labs, proofs of concept, and individual validation environments.
Technical Deep Dive
The solution combines declarative Terraform to create the Resource Group, VNet, Subnet, NSG, NIC, and Ubuntu VM with `custom_data`, plus imperative guest bootstrap through `cloud-init` to update the OS, adjust GRUB, and execute the official EVE-NG installer. I also kept a complementary reprovisioning path via Ansible and a local operational layer with Azure CLI in `lab.sh` for start, stop, and status operations, prioritizing delivery speed without losing control over the lab lifecycle.
EVE-NG Provisioning Flow
Documentation
1. Scope analyzed
This report describes the technical architecture of the EVE-NG on Azure lab provisioning project implemented in this repository. The document was prepared from the current codebase, the Terraform automation present in the repo, the VM bootstrap flow through cloud-init, the complementary Ansible path, and the local operational scripts.
The analyzed scope covers:
- the root Terraform module responsible for creating Azure infrastructure;
- automatic Ubuntu VM bootstrap with EVE-NG Community installation;
- the alternative manual configuration flow through Ansible;
- the local operational layer based on Azure CLI and SSH;
- state artifacts and declared dependencies present in the repository.
Base files considered in this analysis:
- `providers.tf`
- `variables.tf`
- `main.tf`
- `outputs.tf`
- `install_eve.yml`
- `lab.sh`
- `hosts.ini`
- `.terraform.lock.hcl`
- `terraform.tfstate`
- `terraform.tfstate.backup`
- `README.md`
This project does not contain a traditional business application with its own API, database, or custom frontend. The architecture here is essentially Infrastructure as Code + system bootstrap + lab operations.
2. Architecture overview
The implemented architecture is simple, direct, and centered on a single Terraform module. The main flow is:
- the operator authenticates to Azure;
- Terraform creates the base infrastructure;
- the Linux VM boots with `cloud-init`;
- `cloud-init` updates the system, adjusts GRUB, and installs EVE-NG through an external script;
- the operator accesses the lab over HTTP and SSH;
- the daily VM lifecycle is then managed through `lab.sh` using Azure CLI.
Logical topology:
+-------------------------------------------------------------------+
| Operator workstation |
| |
| Terraform CLI Azure CLI Ansible (optional) |
| | | | |
+-------|-------------------|------------------------|---------------+
| | |
| ARM API | ARM API | SSH
v v v
+-------------------------------------------------------------------+
| Azure Subscription |
| |
| +-------------------- Resource Group --------------------------+ |
| | | |
| | VNet 10.0.0.0/16 | |
| | +-- Subnet 10.0.1.0/24 | |
| | | | |
| | +-- NIC ---- NSG (22/80 open) | |
| | \ | |
| | \---- Standard static Public IP | |
| | | |
| | Ubuntu 20.04 Gen2 Linux VM | |
| | - 64 GB Standard SSD disk | |
| | - operator public SSH key | |
| | - custom_data / cloud-init | |
| | | |
| +------------------------------------------------------------+ |
+-------------------------------------------------------------------+
|
| apt + wget
v
Official EVE-NG installer
https://www.eve-ng.net/focal/install-eve.sh

Macro responsibilities:
- Terraform: declares and orchestrates Azure infrastructure.
- `cloud-init`: performs the initial OS bootstrap and installs EVE-NG on first boot.
- Ansible: provides an alternative reprovisioning path after the VM already exists.
- `lab.sh`: wraps basic operations for starting, stopping, and checking VM status.
- Azure: delivers compute, networking, public IP, and the control plane.
The main architectural characteristic of this project is the combination of declarative infrastructure with imperative provisioning inside the guest VM. Terraform creates the platform, but successful delivery of the final product depends on shell commands executed inside the machine.
3. Technology stack
Infrastructure and orchestration
- Terraform `1.14.7`, validated locally
- `hashicorp/azurerm` provider with constraint `~> 3.0`
- locked version in `.terraform.lock.hcl`: `3.117.1`
Target platform
- Microsoft Azure: Resource Group, Virtual Network, Subnet, Public IP, Network Security Group, Network Interface, `azurerm_linux_virtual_machine`
Operating system and bootstrap
- Ubuntu Server `20.04 LTS` Gen2
- `cloud-init` via `custom_data`
- Bash, `apt-get`, `wget`
- official EVE-NG Community script for `focal`
Local operations
- Azure CLI `2.84.0`
- OpenSSH
- Bash shell script in `lab.sh`
Complementary automation
- Ansible `core 2.20.3`
- static inventory in `hosts.ini`
4. Component structure
4.1 Root Terraform module
The repository uses a single Terraform module in the root directory. There is no split into reusable modules, environments, dedicated workspaces, or remotely configured backend in code.
providers.tf
Role:
- declare the `azurerm` provider;
- pin the version range at `~> 3.0`;
- enable `features {}` with default provider behavior.
Relevant decision:
- authentication is not declared in code; it depends on the operator environment or the CI/CD runtime.
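Based on that role, `providers.tf` can be sketched roughly as follows; the exact block layout is an assumption, while the source and version constraint come from the repository:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"   # resolved to 3.117.1 by the lock file
    }
  }
}

provider "azurerm" {
  # Empty features block keeps default provider behavior.
  # No credentials here: authentication comes from az login or ARM_* variables.
  features {}
}
```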
variables.tf
Role:
- centralize minimal environment parameterization;
- define default values for naming, region, VM size, and administrative user.
Existing variables:
- `prefix`
- `location`
- `vm_size`
- `admin_username`
Important observation:
- the variable set is intentionally small and covers only the essentials. Ports, CIDRs, image, disk size, and security policy remain hardcoded in the module.
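A sketch of what `variables.tf` plausibly looks like; the variable names are from the repository, but every default value below is illustrative, not taken from the code:

```hcl
variable "prefix" {
  description = "Name prefix applied to all resources"
  type        = string
  default     = "eveng-lab"   # illustrative; lab.sh expects eveng-lab-* names
}

variable "location" {
  description = "Azure region for all resources"
  type        = string
  default     = "eastus"      # illustrative
}

variable "vm_size" {
  description = "VM size; must support nested virtualization for EVE-NG"
  type        = string
  default     = "Standard_D4s_v3"   # illustrative
}

variable "admin_username" {
  description = "Administrative Linux user"
  type        = string
  default     = "eveadmin"    # illustrative; matches the user in hosts.ini
}
```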
main.tf
Role:
- declare the full Azure infrastructure;
- inject the VM bootstrap using `custom_data`.
Declared resources:
- `azurerm_resource_group.rg`
- `azurerm_virtual_network.vnet`
- `azurerm_subnet.subnet`
- `azurerm_public_ip.public_ip`
- `azurerm_network_security_group.nsg`
- `azurerm_network_interface.nic`
- `azurerm_network_interface_security_group_association.nsg_assoc`
- `azurerm_linux_virtual_machine.vm`
Relevant decisions:
- simple single-layer network topology with no further segmentation;
- NSG associated with the NIC rather than the subnet;
- Standard static public IP;
- inbound access allowed only on `22/TCP` and `80/TCP`;
- SSH authentication based on the operator's local public key;
- single 64 GB system disk;
- fixed Ubuntu image `20_04-lts-gen2`;
- EVE-NG installation embedded as a shell script inside `custom_data`.
outputs.tf
Role:
- retrieve the effective public IP after VM creation;
- expose that value as the `eve_ng_public_ip` output.
Relevant decision:
- the output uses a dependent `data "azurerm_public_ip"` block to query the provisioned address instead of reading the created resource attribute directly.
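That decision can be sketched as follows; the resource addresses come from the inventory in this report, while the explicit `depends_on` wiring is an assumption about how the dependency is expressed:

```hcl
data "azurerm_public_ip" "ip_info" {
  name                = azurerm_public_ip.public_ip.name
  resource_group_name = azurerm_resource_group.rg.name
  # Query only after the VM exists, so the allocated address is final.
  depends_on = [azurerm_linux_virtual_machine.vm]
}

output "eve_ng_public_ip" {
  value = data.azurerm_public_ip.ip_info.ip_address
}
```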
4.2 VM bootstrap and EVE-NG installation
There are two installation mechanisms in the repository.
Primary bootstrap through custom_data
The main path is defined in `main.tf`, inside `azurerm_linux_virtual_machine.vm`.
Executed steps:
- `apt-get update`
- `apt-get upgrade -y`
- change `GRUB_CMDLINE_LINUX_DEFAULT` to `net.ifnames=0 noquiet`
- `update-grub`
- download and execute the official EVE-NG installer
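The steps above, coupled to the VM resource, can be sketched as a `custom_data` payload; the exact repository script is not reproduced here, and the `sed` expression is an assumption about how the GRUB line is rewritten:

```hcl
resource "azurerm_linux_virtual_machine" "vm" {
  # ...name, size, image, disk, NIC, and SSH key arguments elided...

  custom_data = base64encode(<<-EOT
    #!/bin/bash
    apt-get update && apt-get upgrade -y
    # Force legacy NIC naming, as the EVE-NG installer expects.
    sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="net.ifnames=0 noquiet"/' /etc/default/grub
    update-grub
    # Unversioned remote dependency: the installer is fetched at boot time.
    wget -O - https://www.eve-ng.net/focal/install-eve.sh | bash -i
  EOT
  )
}
```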
Architectural implications:
- the VM is intended to come out of `terraform apply` ready for use;
- final lab success depends on the internal bootstrap state, not only on Terraform success;
- provisioning of the final product is coupled to an external script downloaded at runtime.
Alternative path through Ansible (`install_eve.yml`)
The playbook is not called automatically by Terraform. It works as a complementary reprovisioning mechanism.
Relevant technical decisions:
- `gather_facts: no` at the beginning to avoid premature failure;
- `pre_tasks` using `raw` to install Python `3.9`;
- later definition of `ansible_python_interpreter`;
- after the Python bootstrap, normal use of the `apt`, `replace`, and `shell` modules.
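Those ordering tricks can be sketched in playbook form; the host group, package names, and interpreter path are assumptions, and only the `gather_facts`/`raw`/interpreter pattern comes from the repository:

```yaml
- hosts: eve                    # assumed group name from hosts.ini
  become: true
  gather_facts: no              # fact gathering needs Python, which may be absent
  pre_tasks:
    - name: Bootstrap Python 3.9 without any Ansible modules
      raw: apt-get update && apt-get install -y python3.9
    - name: Point Ansible at the freshly installed interpreter
      set_fact:
        ansible_python_interpreter: /usr/bin/python3.9   # assumed path
    - name: Gather facts now that Python exists
      setup:
  tasks:
    - name: Continue with regular modules (apt, replace, shell, ...)
      debug:
        msg: "normal module execution from here on"
```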
Tradeoff:
- there is duplicated logic between `cloud-init` and Ansible;
- the architecture supports two installation routes, but no unified controller selects one or reconciles both.
4.3 Local operational layer
lab.sh
Role:
- wrap common Azure operational commands;
- reduce the need to remember longer CLI calls.
Implemented commands:
- `start`
- `stop`
- `status`
Behavior:
- `start`: runs `az vm start` and queries the public IP with `az vm show -d`
- `stop`: runs `az vm deallocate`
- `status`: queries `instanceView.statuses`
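The dispatch can be sketched as below. The resource names mirror the repository's hardcoded values, but this version only echoes the `az` calls (dry run) so the logic is visible without touching Azure:

```shell
#!/usr/bin/env bash
# Dry-run sketch of a lab.sh-style wrapper: echoes the az calls instead of executing them.
RG="eveng-lab-rg"   # hardcoded, mirroring the repo script
VM="eveng-lab-vm"

lab() {
  case "$1" in
    start)
      echo "az vm start --resource-group $RG --name $VM"
      echo "az vm show -d --resource-group $RG --name $VM --query publicIps -o tsv"
      ;;
    stop)
      echo "az vm deallocate --resource-group $RG --name $VM"
      ;;
    status)
      echo "az vm get-instance-view --resource-group $RG --name $VM --query instanceView.statuses"
      ;;
    *)
      echo "usage: lab.sh {start|stop|status}" >&2
      return 1
      ;;
  esac
}

lab status
```

In the real script the `echo` wrappers would be dropped so the `az` commands execute directly.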
Important limitation:
- the script assumes fixed names: `eveng-lab-rg` and `eveng-lab-vm`. If `prefix` changes in Terraform, the script no longer matches the real infrastructure until it is updated manually.
hosts.ini
Role:
- provide a minimal inventory for the Ansible flow.
Characteristics:
- contains a fixed IP;
- defines `ansible_user=eveadmin`;
- disables `StrictHostKeyChecking`.
This file is useful for quick manual operation, but it represents local state and can age poorly when the VM is recreated or when the administrative user changes.
4.4 Local state artifacts
The repository materializes an architecture with explicit local state.
Observed files:
- `.terraform.lock.hcl`
- `terraform.tfstate`
- `terraform.tfstate.backup`
Consequences:
- reproducing the project elsewhere requires care not to reuse someone else's state;
- the lack of a remote backend limits collaboration, CI/CD, and shared auditability;
- the repository already shows evidence of at least one prior `apply`, including a real public IP output.
5. Infrastructure and domain model
5.1 Main entities
Azure authentication context
Represents the security principal executing Terraform and Azure CLI.
Supported forms in the current architecture:
- interactive session through `az login`;
- `ARM_*` environment variables for non-interactive execution.
Resource group
Represents the logical container for all lab assets.
Business use:
- basic environment isolation;
- natural target for cleanup through `terraform destroy`;
- cost and organizational boundary.
Network plan
Represented by VNet, subnet, NIC, NSG, and Public IP.
Business use:
- connect the VM to Azure;
- expose administrative access through SSH;
- expose web access to EVE-NG.
Compute node
Represented by `azurerm_linux_virtual_machine`.
Business use:
- host EVE-NG Community;
- provide nested virtualization capability according to the chosen VM size;
- concentrate all lab logic inside the guest.
Installation flow
Represented by `custom_data` and, optionally, `install_eve.yml`.
Business use:
- transform a raw Ubuntu VM into a functioning network lab appliance.
Operational control
Represented by `terraform`, `az`, `ssh`, and `ansible-playbook`.
Business use:
- create, operate, recover, and destroy the lab.
5.2 Inventory of declared resources
azurerm_resource_group.rg
Role:
- group all environment resources.
azurerm_virtual_network.vnet
Role:
- define the `10.0.0.0/16` address space.
azurerm_subnet.subnet
Role:
- segment the VNet into `10.0.1.0/24`.
azurerm_public_ip.public_ip
Role:
- expose the VM externally with static allocation and Standard SKU.
azurerm_network_security_group.nsg
Role:
- filter inbound traffic.
Current rules:
- `SSH` on `22/TCP`
- `HTTP` on `80/TCP`
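The SSH opening, for instance, can be sketched as a rule block; the priority and overall layout are assumptions, while the port, protocol, and wildcard source are documented in section 8.2:

```hcl
security_rule {
  name                       = "SSH"
  priority                   = 100        # assumed
  direction                  = "Inbound"
  access                     = "Allow"
  protocol                   = "Tcp"
  source_port_range          = "*"
  destination_port_range     = "22"
  source_address_prefix      = "*"        # no source restriction, as noted in 8.2
  destination_address_prefix = "*"
}
```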
azurerm_network_interface.nic
Role:
- connect the VM to the subnet and the public IP.
azurerm_network_interface_security_group_association.nsg_assoc
Role:
- attach the NSG to the NIC.
azurerm_linux_virtual_machine.vm
Role:
- host the operating system and execute EVE-NG bootstrap.
data.azurerm_public_ip.ip_info
Role:
- query the final IP address for output exposure.
5.3 Relationships
resource_group 1 --- 1 virtual_network
resource_group 1 --- 1 subnet
resource_group 1 --- 1 public_ip
resource_group 1 --- 1 network_security_group
resource_group 1 --- 1 network_interface
resource_group 1 --- 1 linux_virtual_machine
virtual_network 1 --- 1 subnet
subnet 1 --- 1 network_interface
public_ip 1 --- 1 network_interface
network_security_group 1 --- 1 network_interface
network_interface 1 --- 1 linux_virtual_machine
public_ip 1 --- 1 output data source

Observations:
- the topology consists of a single VM;
- there is no load balancer, availability set, separate data disk, or satellite service;
- the architecture is appropriate for an individual lab, not for a distributed service.
5.4 Naming and parameterization model
Implemented rules:
- `prefix` composes the names of the RG, VNet, subnet, PIP, NSG, NIC, and VM;
- `location` defines the region for all resources;
- `vm_size` defines compute capacity;
- `admin_username` defines the Linux administrative user.
Practical consequences:
- the module is easy to understand and instantiate;
- the parameterization is sufficient for a single lab;
- the local operational layer does not fully follow the parameterization, because `lab.sh` ignores a changed `prefix`.
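A minimal sketch of how the wrapper could derive its names from the same prefix instead of hardcoding them; the `-rg`/`-vm` suffix convention is assumed from the names the script uses today:

```shell
#!/usr/bin/env bash
# Derive the resource group and VM names from one prefix, the way Terraform does.
prefix="eveng-lab"          # would come from a tfvars file or terraform output
rg="${prefix}-rg"
vm="${prefix}-vm"
echo "${rg} ${vm}"          # prints: eveng-lab-rg eveng-lab-vm
```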
6. Operational interfaces
Since the project has no application API of its own, the relevant interfaces are operational.
Terraform CLI
Responsibilities:
- `terraform init`
- `terraform plan`
- `terraform apply`
- `terraform destroy`
- `terraform output`
Use:
- creation and destruction of infrastructure;
- inspection of the public IP;
- reading local state.
Azure CLI
Responsibilities:
- authenticate to Azure;
- inspect the active subscription;
- start and deallocate the VM;
- obtain status and IP during operations.
Use:
- support `lab.sh`;
- manual operational verification outside Terraform.
SSH
Responsibilities:
- administrative access to the VM;
- verification of `cloud-init`;
- troubleshooting installation issues.
Use:
`ssh <admin_username>@<public_ip>`
Ansible
Responsibilities:
- reprovision or reinstall components inside the already created VM.
Use:
`ansible-playbook -i hosts.ini install_eve.yml`
HTTP for EVE-NG
Responsibilities:
- functional access to the lab through the browser.
Use:
`http://<public_ip>/`
Observation:
- the repository does not configure HTTPS or a dedicated reverse proxy.
7. Critical business flows
7.1 Operator workstation bootstrap
Flow:
- the operator installs Terraform, Azure CLI, and SSH;
- authenticates to Azure or exports the `ARM_*` variables;
- ensures `~/.ssh/id_rsa.pub` exists;
- runs `terraform init`.
Technical value:
- prepares the control workstation;
- shifts authentication responsibility to the environment instead of the code.
7.2 Initial lab provisioning
Flow:
- the operator runs `terraform plan` and `terraform apply`;
- the `azurerm` provider calls the Azure control plane;
- networking resources are created;
- the NSG is attached to the NIC;
- the VM is created with the SSH key and `custom_data`;
- the output queries the public IP;
- the operator receives `eve_ng_public_ip`.
Attention point:
- a successful `terraform apply` does not guarantee that EVE-NG has finished installing inside the VM.
7.3 Internal VM bootstrap on first boot
Flow:
- the VM boots from the Ubuntu 20.04 image;
- `cloud-init` executes the shell script provided in `custom_data`;
- the OS is updated;
- GRUB is adjusted;
- the EVE-NG installer is downloaded and executed;
- the operator accesses the VM over SSH and can wait with `cloud-init status --wait`;
- once the installation finishes, web access to EVE-NG becomes available.
Characteristics:
- strongly imperative bootstrap;
- dependency on outbound internet access;
- dependency on the behavior of the external official script.
7.4 Optional reprovisioning through Ansible
Flow:
- the operator updates `hosts.ini` with the correct IP;
- runs `ansible-playbook -i hosts.ini install_eve.yml`;
- the playbook installs Python `3.9` through `raw`;
- collects system facts;
- updates packages;
- adjusts GRUB;
- downloads and executes the EVE-NG installer again.
Technical value:
- provides a maintenance path without requiring a new `terraform apply`.
Tradeoff:
- duplication with `cloud-init` can create operational drift over which mechanism should be treated as the canonical bootstrap source.
7.5 Day-to-day lab operation
Flow:
- the operator runs `./lab.sh status` to inspect current state;
- uses `./lab.sh stop` to deallocate the VM;
- uses `./lab.sh start` to power it back on;
- the script queries and prints the public IP.
Technical value:
- reduces operational friction;
- turns Azure CLI into a simple operational interface.
7.6 Environment destruction
Flow:
- the operator runs `terraform destroy`;
- the provider removes the VM, NIC, NSG, public IP, subnet, VNet, and resource group;
- local state is updated to reflect the destruction.
Technical value:
- releases cost and avoids orphaned environments;
- keeps the lab aligned with the disposable IaC model.
8. Security, isolation, and risk surface
8.1 Responsibility boundaries
System boundaries are clear:
- the local workstation holds Azure access credentials;
- Azure hosts the resources;
- the VM hosts EVE-NG;
- the EVE-NG site provides the installation script at runtime.
This architecture simplifies the design, but increases the trust dependency on components outside this repository.
8.2 Network exposure
The current NSG opens:
- `22/TCP` for SSH
- `80/TCP` for HTTP
Characteristics:
- `source_address_prefix = "*"`
- no source IP restriction
- no HTTPS
Consequence:
- the attack surface is broad for a publicly exposed lab.
8.3 Authentication and secrets
Observed points:
- the provider depends on external authentication, not hardcoded credentials;
- the VM uses the local public SSH key;
- there is no Key Vault, Managed Identity, or guest secret injection;
- `hosts.ini` disables strict SSH host key verification.
Critical reading:
- for a personal lab, this approach is acceptable;
- for a team or real pipeline, stronger identity and secret handling controls are still missing.
8.4 Bootstrap integrity
The main path executes:
`wget -O - https://www.eve-ng.net/focal/install-eve.sh | bash -i`

Implicit risk:
- the code effectively executed inside the VM is not versioned in this repository;
- changes in the remote script can alter behavior, break installation, or introduce supply chain risk.
8.5 Local state and sensitive data
Using a local `terraform.tfstate` introduces:
- real resource IDs persisted on the local filesystem;
- possible improper sharing if the repository is distributed without cleanup;
- operational conflict risk between different operators.
There is no remote backend, collaborative locking, or declared Terraform-side state encryption strategy.
9. Observability and operations
9.1 Built-in observability
The project has minimal observability, but enough for a small lab.
Available sources:
- `terraform output` for the public IP;
- `terraform state list` for environment inventory;
- `lab.sh status` through Azure CLI;
- SSH to inspect `cloud-init`;
- Azure Portal and CLI for manual troubleshooting.
9.2 Logs and diagnostics
The repository does not configure:
- Azure Monitor;
- Log Analytics;
- explicit Boot Diagnostics;
- centralized log collection;
- application-level EVE-NG health checks after bootstrap.
In practice, troubleshooting depends on:
- Terraform output;
- Azure CLI responses;
- `cloud-init` logs inside the VM;
- HTTP and SSH reachability.
9.3 Operational support files
Observed support files:
- `hosts.ini`
- `.terraform.lock.hcl`
- `terraform.tfstate`
- `terraform.tfstate.backup`
They help local operation, but also make it clear that the current architecture is still centered on a single operator and a specific workstation.
9.4 Limits of current observability
Concrete limits:
- there is no automated confirmation that EVE-NG finished starting after `apply`;
- there is no alerting for remote installer failure;
- there is no bootstrap duration metric;
- there is no telemetry for cost, usage, or lab availability.
10. Declared dependencies
Terraform
Declared dependencies:
- provider `registry.terraform.io/hashicorp/azurerm`
- constraint `~> 3.0`
- lock file pinned at `3.117.1`
Local external tools
Operational dependencies:
- Terraform CLI
- Azure CLI
- OpenSSH
- Ansible for the optional path
Guest dependencies
Dependencies executed inside the VM:
- Ubuntu `apt` repositories
- `wget`
- the remote EVE-NG installer script
Cloud dependencies
The automation assumes availability of:
- VM quota compatible with the chosen size;
- regional support for the chosen `vm_size`;
- nested virtualization capability for the selected SKU.
11. Infra, build, and local deployment
11.1 Deployment structure
Deployment is entirely local and operator-driven.
There are no:
- CI/CD pipeline YAML files;
- GitHub Actions workflows;
- Azure DevOps pipelines;
- separate Terraform modules by layer;
- independent build environment.
This means that although the flow may commonly be described as a "pipeline", the current implementation is more precisely a local provisioning automation.
11.2 Infrastructure lifecycle
The lifecycle is organized around:
- `init` to resolve the provider and lock file;
- `plan` to calculate changes;
- `apply` to provision;
- `output` to discover the IP;
- `destroy` to tear everything down.
11.3 Software deployment inside the VM
There is no image baking or ready-made appliance artifact.
Software deployment happens:
- at boot time, through `cloud-init`;
- on demand, through Ansible.
Benefit:
- repository simplicity.
Cost:
- longer bootstrap time;
- greater variability across executions;
- stronger dependency on internet access and the remote script.
11.4 Evidence of current runtime
The directory contains:
- `terraform.tfstate`
- `terraform.tfstate.backup`
- a local `eve_ng_public_ip` output
This shows that the automation has already been applied at least once and reinforces that the project currently works with persisted local state.
12. Tests and current coverage
12.1 Terraform validation
It was possible to validate locally:
- `terraform validate` with a successful result
This validation confirms:
- valid syntax;
- consistent internal references;
- acceptable schema for the installed provider.
12.2 Absence of automated test suite
The project does not contain:
- unit tests;
- integration tests;
- `terratest`;
- `terraform test`;
- Ansible linting;
- automated shell script validation.
12.3 Verification gaps
The following areas have no automated guarantee:
- effective EVE-NG installation after boot;
- `vm_size` compatibility with the chosen region;
- Ansible playbook idempotence;
- consistency between the Terraform `prefix` and the hardcoded names in `lab.sh`;
- NSG security hardening;
- actual readiness of the EVE-NG web interface.
12.4 Current real-world test model
In practice, the real test model for this project is still mostly manual:
- run `terraform apply`;
- obtain the IP;
- wait for `cloud-init`;
- open EVE-NG over HTTP;
- validate SSH;
- test `lab.sh stop/start/status`.
13. Configuration by environment
13.1 Central variables
Central file:
variables.tf
Current parameters:
- `prefix`
- `location`
- `vm_size`
- `admin_username`
13.2 Authentication by environment
Behavior changes according to the execution environment:
- local environment: `az login`
- pipeline or automation: `ARM_*` variables
The code does not explicitly differentiate dev, qa, and prod. That segmentation depends on operational discipline and on `.tfvars` files external to the repository.
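For the non-interactive mode, the `azurerm` provider reads a well-known set of `ARM_*` variables; the values below are placeholders, not real credentials:

```shell
#!/usr/bin/env bash
# Service-principal authentication via environment, read by the azurerm provider.
# All values are placeholders.
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="placeholder-secret"
# terraform plan / apply can now run without az login.
```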
13.3 Operational impact of parameters
- `prefix`: changes resource names, but does not update `lab.sh` automatically
- `location`: affects SKU availability and latency
- `vm_size`: determines EVE-NG viability and cost
- `admin_username`: changes SSH access and Ansible inventory expectations
13.4 Parameters not externalized
Still fixed in code:
- VNet and subnet CIDRs
- open NSG ports
- disk size
- Ubuntu image
- EVE-NG installer URL
This simplifies V1, but reduces flexibility across environments.
14. Alignment with the project objective and critical reading of V1
14.1 Objectives clearly achieved
Achieved by the current implementation:
- automated creation of an EVE-NG lab on Azure;
- provisioning of networking, public IP, and Linux VM;
- initial automated EVE-NG installation;
- exposure of public IP as output;
- basic start, stop, and status operations;
- alternative reinstall path through Ansible.
14.2 Objectives achieved partially or with tradeoffs
Pipeline
The project achieves provisioning automation, but does not implement a full CI/CD pipeline. It still lacks the orchestration layer for pipelines, approvals, remote backend, and environment segregation.
Reprovisioning
There is an Ansible playbook, but it is not integrated into the main flow. The operator must decide manually when to use Terraform and when to use Ansible.
Recurring operations
`lab.sh` covers the basics, but still depends on hardcoded names and does not inspect Terraform state to discover the RG and VM names dynamically.
14.3 Real technical gaps in V1
- lack of remote Terraform backend;
- lack of reusable modules;
- lack of automated EVE-NG readiness validation;
- lack of default security hardening;
- lack of reconciliation between `cloud-init` and Ansible;
- lack of dynamic Ansible inventory;
- lack of stronger native observability.
15. Risks, limitations, and tradeoffs
15.1 Single module in the root directory
Benefits:
- simple reading;
- fast onboarding;
- low initial complexity.
Costs:
- less reuse;
- limited architectural scalability;
- mixed concerns of networking, compute, and bootstrap in a single main file.
15.2 Dependency on external remote script
This is the main bootstrap tradeoff.
- advantage: fast and simple EVE-NG installation;
- cost: lower determinism and higher supply chain risk.
15.3 Local state as source of truth
For individual use, this is a pragmatic decision.
For collaborative use, it implies:
- conflict risk between operators;
- low shared traceability;
- lack of remote locking.
15.4 Drift between installation mechanisms
`cloud-init` and `install_eve.yml` try to achieve a similar outcome through different paths.
Consequences:
- higher cognitive cost;
- possibility of future divergence;
- operational uncertainty over which mechanism should be treated as canonical.
15.5 Public exposure security
Benefits:
- simple access to the lab;
- less friction in use.
Costs:
- SSH and HTTP exposed to any source;
- no HTTPS;
- posture unsuitable for more sensitive scenarios.
15.6 Dependency on external platform variables
Deployment success depends on factors outside the repository:
- subscription quota;
- regional support for the selected VM size;
- availability of nested virtualization;
- stability of the official installer URL.
16. Evolution recommendations
16.1 Short term
- move Terraform state to a remote backend;
- parameterize `lab.sh` from `prefix` or from Terraform state itself;
- restrict `source_address_prefix` in the NSG;
- generate `hosts.ini` dynamically from `terraform output`;
- add automated verification of `cloud-init` completion and HTTP availability after `apply`.
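The last item could start as small as a polling helper run after `apply`; everything here (probe command, retry budget) is an assumption, not existing repository code:

```shell
#!/usr/bin/env bash
# Generic readiness gate: retries a probe command until it succeeds or the budget runs out.
wait_for() {
  local probe=$1 max=$2 delay=${3:-15} i=0
  until eval "$probe"; do
    i=$((i + 1))
    [ "$i" -ge "$max" ] && return 1   # give up after $max failed probes
    sleep "$delay"
  done
}

# Intended usage after terraform apply (not executed here):
#   ip=$(terraform output -raw eve_ng_public_ip)
#   wait_for "curl -fsS -o /dev/null http://$ip/" 60 15 && echo "EVE-NG web UI is up"
```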
16.2 Medium term
- split networking, compute, and bootstrap into Terraform modules;
- turn the bootstrap script into a versioned template;
- choose a canonical path between `cloud-init` and Ansible;
- add a CI/CD pipeline with separate `plan` and `apply` stages;
- introduce dynamic Ansible inventory or eliminate the fixed inventory file.
16.3 Long term
- adopt image baking to reduce bootstrap time;
- integrate Azure Monitor / Log Analytics observability;
- support multiple environments with consistent naming and backend design;
- harden access to EVE-NG with TLS, source controls, and secret management;
- evolve from local automation into a truly shareable lab platform.
17. Technical conclusion
The project presents an architecture that is coherent with a clear goal: quickly bring up a functional EVE-NG lab on Azure with the minimum number of components. The design is correct in keeping infrastructure small, bootstrap direct, and local operation simple. For an individual lab, technical demonstration, or proof of concept, the approach is defensible and functional.
The weaknesses appear when the project is read as the base for a shared pipeline or a more durable environment. Local state, dependence on a remote script, broad network exposure, duplication between `cloud-init` and Ansible, and the lack of automated EVE-NG readiness validation show that the current implementation is still closer to an operational lab V1 than to a mature automation platform.
In short, the current architecture is good for delivering speed and simplicity, but it still needs hardening and modularization to support collaboration, strong repeatability, and environment governance at a larger scale.