Reorganize repo, enrollment share taxonomy, Blancco USB-build fixes, v4.10 PPKGs

Workstation reorganization:
- All build/deploy/helper scripts moved into scripts/ (paths updated to use
  REPO_ROOT instead of SCRIPT_DIR so they resolve sibling dirs from the new
  depth)
- New config/ directory placeholder for site-specific overrides
- Removed stale: mok-keys/, test-vm.sh, test-lab.sh, setup-guide-original.txt,
  unattend/ (duplicate of moved playbook/FlatUnattendW10.xml)
- README.md and SETUP.md structure listings updated, dead "Testing with KVM"
  section removed
- .claude/ gitignored

Enrollment share internal taxonomy (forward-looking; existing servers
unaffected since they keep their current boot.wim with flat paths):
- Single SMB share kept (WinPE mounts just one share, at Y:), but content is
  now organised into ppkgs/, scripts/, config/, shopfloor-setup/, pre-install/{bios,
  installers}, installers-post/cmm/, blancco/, logs/
- README.md deployed to share root explaining each subdir
- New playbook tasks deploy site-config.json + wait-for-internet.ps1 +
  migrate-to-wifi.ps1 explicitly (were ad-hoc on legacy servers)
- BIOS subdir moved into pre-install/bios/, preinstall/ renamed to pre-install/
- startnet.cmd + startnet-template.cmd updated with new Y:\subdir\ paths
- Bumped GCCH PPKG references v4.9 -> v4.10
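The taxonomy above is created by a single looped playbook task; a minimal standalone sketch of the same directory set (demo path — the real share root is /srv/samba/enrollment, and the real task also sets mode 0777):

```shell
# Recreate the enrollment-share subdirectory set described above.
# SHARE defaults to a local demo directory so this can run unprivileged.
SHARE="${SHARE:-./enrollment-demo}"
for d in ppkgs scripts config shopfloor-setup \
         pre-install/bios pre-install/installers \
         installers-post/cmm blancco logs; do
  mkdir -p "$SHARE/$d"
done
find "$SHARE" -type d | sort
```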

Blancco USB-build fixes (so the next fresh USB install boots Blancco end-to-end
without the manual fixup we did against GOLD):
- grub-blancco.cfg: kernel/initrd switched HTTP -> TFTP (GRUB's HTTP module
  times out on multi-MB files); added modprobe.blacklist=iwlwifi,iwlmvm,btusb
  (WiFi drivers hang udev on Intel business PCs)
- grubx64.efi rebuilt from updated cfg
- Playbook task added to create /srv/tftp/blancco/ symlinks pointing at the
  HTTP-served binaries
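The symlink task above boils down to the following; a sketch with demo stand-in paths (the real roots are the playbook's tftp_dir and web_root variables, and the symlinked files are the multi-MB Blancco binaries already served over HTTP):

```shell
# Expose the HTTP-served Blancco boot artifacts over TFTP via symlinks,
# since GRUB's HTTP module times out on multi-MB files but TFTP is reliable.
WEB_ROOT="$(pwd)/demo-www"    # stands in for the real HTTP docroot
TFTP_DIR="$(pwd)/demo-tftp"   # stands in for /srv/tftp
mkdir -p "$WEB_ROOT/blancco" "$TFTP_DIR/blancco"
for f in vmlinuz-bde-linux initramfs-bde-linux.img \
         intel-ucode.img amd-ucode.img config.img; do
  touch "$WEB_ROOT/blancco/$f"                         # stand-in for real binary
  ln -sf "$WEB_ROOT/blancco/$f" "$TFTP_DIR/blancco/$f" # TFTP path -> HTTP file
done
ls -l "$TFTP_DIR/blancco"
```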

run-enrollment.ps1: OOBEComplete is now set AFTER PPKG install (Win11 22H2+
hangs indefinitely if OOBEComplete is set before the bulk-enrollment PPKG runs).

Also includes deploy-bios.sh / pull-bios.sh / busybox-static / models.txt,
which were sitting untracked at the repo root.
cproudlock
2026-04-14 16:01:02 -04:00
parent d14c240b48
commit d6776f7c7f
26 changed files with 380 additions and 824 deletions

.gitignore (vendored, +2 lines)

@@ -53,3 +53,5 @@ secrets.md
*.ppkg
enrollment/
drivers-staging/
bios-staging/
.claude/


@@ -54,7 +54,7 @@ Client PXE boot (UEFI Secure Boot)
### Step 1: Download Offline Packages
```bash
./download-packages.sh
./scripts/download-packages.sh
```
Downloads all .deb packages and Python wheels for offline installation (~140 MB of debs, ~20 MB of wheels).
@@ -62,7 +62,7 @@ Downloads all .deb packages and Python wheels for offline installation (~140 MB
### Step 2: Prepare Boot Tools (optional)
```bash
./prepare-boot-tools.sh /path/to/blancco.iso /path/to/clonezilla.zip /path/to/memtest.bin
./scripts/prepare-boot-tools.sh /path/to/blancco.iso /path/to/clonezilla.zip /path/to/memtest.bin
```
Extracts and configures boot tool files (Blancco, Clonezilla, Memtest86+). Automatically patches Blancco's config.img to auto-save erasure reports to the PXE server's Samba share.
@@ -70,7 +70,7 @@ Extracts and configures boot tool files (Blancco, Clonezilla, Memtest86+). Autom
### Step 3: Build the USB
```bash
sudo ./build-usb.sh /dev/sdX /path/to/ubuntu-24.04-live-server-amd64.iso
sudo ./scripts/build-usb.sh /dev/sdX /path/to/ubuntu-24.04-live-server-amd64.iso
```
Creates a bootable USB with two partitions:
@@ -158,50 +158,32 @@ pxe-server/
│ └── audit.html # Activity audit log
├── docs/
│ └── shopfloor-display-imaging-guide.md # End-user imaging guide
├── unattend/
│ └── FlatUnattendW10.xml # Windows unattend.xml template
├── boot-tools/ # Extracted boot tool files (gitignored)
│ ├── blancco/ # Blancco Drive Eraser (Arch Linux-based)
│ ├── blancco/ # Blancco Drive Eraser
│ ├── clonezilla/ # Clonezilla Live
│ └── memtest/ # Memtest86+
├── boot-files/ # WinPE boot files (boot.wim, wimboot, ipxe.efi, BCD)
├── offline-packages/ # .deb files (gitignored, built by download-packages.sh)
├── pip-wheels/ # Python wheels (gitignored, built by download-packages.sh)
├── download-packages.sh # Downloads offline .debs + pip wheels
├── build-usb.sh # Builds the installer USB (2-partition)
├── prepare-boot-tools.sh # Extracts and patches boot tool files
├── build-proxmox-iso.sh # Builds self-contained Proxmox installer ISO
├── test-vm.sh # KVM test environment for validation
├── test-lab.sh # Full PXE lab with server + client VMs
├── enrollment/ # PPKGs and run-enrollment.ps1 (gitignored)
├── bios-staging/ # Dell BIOS update binaries (gitignored)
├── scripts/ # Build, deploy, and helper scripts
│ ├── build-usb.sh # Builds the installer USB (2-partition)
│ ├── build-proxmox-iso.sh # Builds self-contained Proxmox installer ISO
│ ├── prepare-boot-tools.sh # Extracts and patches boot tool files
│ ├── download-packages.sh # Downloads offline .debs + pip wheels
│ ├── download-drivers.py # Downloads Dell drivers directly from dell.com
│ ├── deploy-bios.sh # Pushes BIOS updates to enrollment share
│ ├── pull-bios.sh # Pulls BIOS binaries from upstream cache
│ ├── sync_hardware_models.py # Syncs hardware model configs across images
│ ├── Upload-Image.ps1 # Windows: upload MCL cache to PXE via SMB
│ └── Download-Drivers.ps1 # Windows: download hardware drivers from GE CDN
├── config/ # Site-specific configuration overrides
├── startnet-template.cmd # startnet.cmd template (synced with playbook copy)
├── Download-Drivers.ps1 # Download hardware drivers from GE CDN (Windows)
├── Upload-Image.ps1 # Upload MCL cache to PXE server via SMB (Windows)
├── download-drivers.py # Download Dell drivers directly from dell.com
├── sync_hardware_models.py # Sync hardware model configs across images
├── SETUP.md # Detailed setup guide
└── setup-guide-original.txt # Original manual setup notes (reference)
├── README.md # This file
└── SETUP.md # Detailed setup guide
```
## Testing with KVM
A test VM script is included for validating the full provisioning pipeline without dedicated hardware:
```bash
# Download Ubuntu Server ISO
wget -O ~/Downloads/ubuntu-24.04.3-live-server-amd64.iso \
https://releases.ubuntu.com/noble/ubuntu-24.04.3-live-server-amd64.iso
# Launch test VM (requires libvirt/KVM)
sudo ./test-vm.sh ~/Downloads/ubuntu-24.04.3-live-server-amd64.iso
# Watch install progress
sudo virsh console pxe-test
# Clean up when done
sudo ./test-vm.sh --destroy
```
The test VM creates an isolated libvirt network (10.9.100.0/24) and runs the full autoinstall + Ansible provisioning.
## Proxmox Deployment
A single ISO can be built for deploying the PXE server in a Proxmox VM:
@@ -213,7 +195,7 @@ A single ISO can be built for deploying the PXE server in a Proxmox VM:
sudo apt install xorriso p7zip-full
# Build the installer ISO
./build-proxmox-iso.sh /path/to/ubuntu-24.04-live-server-amd64.iso
./scripts/build-proxmox-iso.sh /path/to/ubuntu-24.04-live-server-amd64.iso
```
This creates `pxe-server-proxmox.iso` containing the Ubuntu installer, autoinstall config, all offline packages, the Ansible playbook, webapp, and boot tools.


@@ -46,7 +46,7 @@ Client PXE boot
### Step 1: Download Offline Packages (one-time, requires internet)
```bash
./download-packages.sh
./scripts/download-packages.sh
```
Downloads all .deb packages (ansible, dnsmasq, apache2, samba, wimtools, etc.) into `offline-packages/` and Python wheels (flask, lxml) into `pip-wheels/`. Approximately 252 packages (~140 MB) + 8 Python wheels.
@@ -61,7 +61,7 @@ Downloads all .deb packages (ansible, dnsmasq, apache2, samba, wimtools, etc.) i
### Step 2: Prepare Boot Tools (optional)
```bash
./prepare-boot-tools.sh /path/to/blancco.iso /path/to/clonezilla.zip /path/to/memtest.bin
./scripts/prepare-boot-tools.sh /path/to/blancco.iso /path/to/clonezilla.zip /path/to/memtest.bin
```
Extracts boot files for Blancco, Clonezilla, and Memtest86+ into the `boot-tools/` directory. Automatically patches Blancco's `config.img` to auto-save erasure reports to the PXE server's Samba share.
@@ -70,10 +70,10 @@ Extracts boot files for Blancco, Clonezilla, and Memtest86+ into the `boot-tools
```bash
# Basic — server only (import WinPE images later)
sudo ./build-usb.sh /dev/sdX /path/to/ubuntu-24.04-live-server-amd64.iso
sudo ./scripts/build-usb.sh /dev/sdX /path/to/ubuntu-24.04-live-server-amd64.iso
# With WinPE images bundled (single USB, larger drive needed)
sudo ./build-usb.sh /dev/sdX /path/to/ubuntu-24.04.iso /path/to/winpe-images
sudo ./scripts/build-usb.sh /dev/sdX /path/to/ubuntu-24.04.iso /path/to/winpe-images
```
This creates a bootable USB with:
@@ -175,27 +175,30 @@ pxe-server/
│ └── templates/ # Jinja2 HTML templates (10 pages)
├── docs/
│ └── shopfloor-display-imaging-guide.md # End-user imaging guide
├── unattend/
│ └── FlatUnattendW10.xml # Windows unattend.xml template
├── boot-tools/ # Extracted boot files (gitignored, built by prepare-boot-tools.sh)
│ ├── blancco/ # Blancco Drive Eraser
│ ├── clonezilla/ # Clonezilla Live
│ └── memtest/ # Memtest86+
├── boot-files/ # WinPE boot files (boot.wim, wimboot, ipxe.efi, BCD)
├── offline-packages/ # .deb files (gitignored, built by download-packages.sh)
├── pip-wheels/ # Python wheels (gitignored, built by download-packages.sh)
├── download-packages.sh # Downloads all offline packages
├── build-usb.sh # Builds the 2-partition installer USB
├── prepare-boot-tools.sh # Extracts/patches boot tools from ISOs
├── build-proxmox-iso.sh # Builds self-contained Proxmox installer ISO
├── test-vm.sh # KVM test environment
├── test-lab.sh # Full PXE lab with server + client VMs
├── enrollment/ # PPKGs and run-enrollment.ps1 (gitignored)
├── bios-staging/ # Dell BIOS update binaries (gitignored)
├── scripts/ # Build, deploy, and helper scripts
│ ├── build-usb.sh # Builds the 2-partition installer USB
│ ├── build-proxmox-iso.sh # Builds self-contained Proxmox installer ISO
│ ├── prepare-boot-tools.sh # Extracts/patches boot tools from ISOs
│ ├── download-packages.sh # Downloads all offline packages
│ ├── download-drivers.py # Downloads Dell drivers from dell.com
│ ├── deploy-bios.sh # Pushes BIOS updates to enrollment share
│ ├── pull-bios.sh # Pulls BIOS binaries from upstream cache
│ ├── sync_hardware_models.py # Syncs hardware model configs across images
│ ├── Upload-Image.ps1 # Windows: upload MCL cache to PXE via SMB
│ └── Download-Drivers.ps1 # Windows: download hardware drivers from GE CDN
├── config/ # Site-specific configuration overrides
├── startnet-template.cmd # startnet.cmd template (synced with playbook copy)
├── Download-Drivers.ps1 # Download hardware drivers from GE CDN (Windows)
├── Upload-Image.ps1 # Upload MCL cache to PXE server via SMB (Windows)
├── download-drivers.py # Download Dell drivers directly from dell.com
├── sync_hardware_models.py # Sync hardware model configs across images
├── README.md # Project overview
└── setup-guide-original.txt # Original manual setup notes (reference)
└── SETUP.md # Detailed setup guide (this file)
```
## Image Types

Binary file not shown.

Binary file not shown.

Binary file not shown.

playbook/busybox-static (binary, executable file)

Binary file not shown.


@@ -300,32 +300,72 @@
state: directory
mode: '0777'
- name: "Create enrollment packages directory"
- name: "Create enrollment share with internal taxonomy"
file:
path: /srv/samba/enrollment
path: "/srv/samba/enrollment/{{ item }}"
state: directory
mode: '0777'
loop:
- ""
- ppkgs
- scripts
- config
- shopfloor-setup
- pre-install
- pre-install/bios
- pre-install/installers
- installers-post
- installers-post/cmm
- blancco
- logs
- name: "Deploy PPKG enrollment packages to enrollment share"
- name: "Deploy enrollment share README"
copy:
dest: /srv/samba/enrollment/README.md
mode: '0644'
content: |
# Enrollment Share Layout
Single SMB share mounted by WinPE as Y: during imaging. Subdir layout:
- ppkgs/ GCCH bulk-enrollment PPKGs
- scripts/ run-enrollment.ps1, wait-for-internet.ps1, migrate-to-wifi.ps1
- config/ site-config.json, FlatUnattendW10*.xml, per-site overrides
- shopfloor-setup/ Per-PC-type post-imaging scripts
- pre-install/ WinPE-phase content (bios/, installers/, preinstall.json)
- installers-post/ Post-OOBE app installers (cmm/PCDMIS, etc.)
- blancco/ Blancco custom images / configs
- logs/ Client log uploads
- name: "Deploy PPKG enrollment packages to ppkgs/"
shell: |
set +e
# Copy any whole PPKGs (small enough to fit on FAT32)
cp -f {{ usb_root }}/enrollment/*.ppkg /srv/samba/enrollment/ 2>/dev/null
cp -f {{ usb_root }}/enrollment/*.ppkg /srv/samba/enrollment/ppkgs/ 2>/dev/null
# Reassemble any split files (foo.ppkg.part.00, .01, ... -> foo.ppkg)
for first in {{ usb_root }}/enrollment/*.part.00; do
[ -e "$first" ] || continue
base="${first%.part.00}"
name="$(basename "$base")"
echo "Reassembling $name from chunks..."
cat "${base}.part."* > "/srv/samba/enrollment/$name"
cat "${base}.part."* > "/srv/samba/enrollment/ppkgs/$name"
done
ls -lh /srv/samba/enrollment/*.ppkg 2>/dev/null
ls -lh /srv/samba/enrollment/ppkgs/*.ppkg 2>/dev/null
ignore_errors: yes
- name: "Deploy run-enrollment.ps1 to enrollment share"
- name: "Deploy enrollment scripts to scripts/"
copy:
src: "{{ usb_mount }}/shopfloor-setup/run-enrollment.ps1"
dest: /srv/samba/enrollment/run-enrollment.ps1
src: "{{ item.src }}"
dest: "/srv/samba/enrollment/scripts/{{ item.dest }}"
mode: '0644'
loop:
- { src: "{{ usb_mount }}/shopfloor-setup/run-enrollment.ps1", dest: "run-enrollment.ps1" }
- { src: "{{ usb_mount }}/wait-for-internet.ps1", dest: "wait-for-internet.ps1" }
- { src: "{{ usb_mount }}/migrate-to-wifi.ps1", dest: "migrate-to-wifi.ps1" }
ignore_errors: yes
- name: "Deploy site-config.json to config/"
copy:
src: "{{ usb_mount }}/shopfloor-setup/site-config.json"
dest: /srv/samba/enrollment/config/site-config.json
mode: '0644'
ignore_errors: yes
@@ -363,43 +403,28 @@
directory_mode: '0755'
ignore_errors: yes
- name: "Create preinstall bundle directory on enrollment share"
file:
path: "{{ item }}"
state: directory
mode: '0755'
loop:
- /srv/samba/enrollment/preinstall
- /srv/samba/enrollment/preinstall/installers
- name: "Deploy preinstall.json (installer binaries staged separately)"
- name: "Deploy preinstall.json to pre-install/"
copy:
src: "{{ usb_mount }}/preinstall/preinstall.json"
dest: /srv/samba/enrollment/preinstall/preinstall.json
dest: /srv/samba/enrollment/pre-install/preinstall.json
mode: '0644'
ignore_errors: yes
- name: "Create BIOS update directory on enrollment share"
file:
path: /srv/samba/enrollment/BIOS
state: directory
mode: '0755'
- name: "Deploy BIOS check script and manifest"
- name: "Deploy BIOS check script and manifest to pre-install/bios/"
copy:
src: "{{ usb_mount }}/shopfloor-setup/BIOS/{{ item }}"
dest: /srv/samba/enrollment/BIOS/{{ item }}
dest: "/srv/samba/enrollment/pre-install/bios/{{ item }}"
mode: '0644'
loop:
- check-bios.cmd
- models.txt
ignore_errors: yes
- name: "Deploy BIOS update binaries from USB"
- name: "Deploy BIOS update binaries from USB to pre-install/bios/"
shell: >
if [ -d "{{ usb_root }}/bios" ]; then
cp -f {{ usb_root }}/bios/*.exe /srv/samba/enrollment/BIOS/ 2>/dev/null || true
count=$(find /srv/samba/enrollment/BIOS -name '*.exe' | wc -l)
cp -f {{ usb_root }}/bios/*.exe /srv/samba/enrollment/pre-install/bios/ 2>/dev/null || true
count=$(find /srv/samba/enrollment/pre-install/bios -name '*.exe' | wc -l)
echo "Deployed $count BIOS binaries"
else
echo "No bios/ on USB - skipping"
@@ -668,6 +693,29 @@
remote_src: yes
mode: '0644'
- name: "Create TFTP blancco directory"
file:
path: "{{ tftp_dir }}/blancco"
state: directory
owner: nobody
group: nogroup
mode: '0755'
- name: "Create TFTP symlinks for Blancco kernel/initrd (GRUB HTTP times out on large files; TFTP is reliable)"
file:
src: "{{ web_root }}/blancco/{{ item }}"
dest: "{{ tftp_dir }}/blancco/{{ item }}"
state: link
force: yes
owner: nobody
group: nogroup
loop:
- vmlinuz-bde-linux
- initramfs-bde-linux.img
- intel-ucode.img
- amd-ucode.img
- config.img
- name: "Build Ubuntu kernel modules tarball for Blancco"
shell: |
set -e


@@ -0,0 +1,47 @@
# ModelSubstring|BIOSFile|Version
13 Plus PB13250|Dell_Pro_PA13250_PA14250_PB13250_PB14250_PB16250_2.8.1.exe|2.8.1
14 MC14250|Dell_Pro_Max_MC16250_MC14250_1.9.0.exe|1.9.0
14 PC14250|Dell_Pro_PC14250_PC16250_1.7.0.exe|1.7.0
14 Premium MA14250|Dell_Pro_Max_MA14250_MA16250_1.7.1.exe|1.7.1
16 Plus MB16250|Dell_Pro_Max_MB16250_MB18250_2.2.2.exe|2.2.2
24250|Dell_Pro_Plus_QB24250_QC24250_QC24251_1.9.2.exe|1.9.2
5430|Latitude_5430_7330_Rugged_1.41.0.exe|1.41.0
7090|OptiPlex_7090_1.40.0.exe|1.40.0
7220 Rugged|Latitude_7220_Rugged_Extreme_Tablet_1.50.0.exe|1.50.0
7230 Rugged Extreme Tablet|Latitude_7230_1.30.1.exe|1.30.1
7320 2-in-1 Detachable|Dell_Latitude_7320_Detachable_1.45.0_64.exe|1.45.0
7350 Detachable|Latitude_7350_Detachable_1.9.1.exe|1.9.1
7400 AIO|Latitude_7X00_1.43.0.exe|1.43.0
AIO Plus 7410|Latitude_7X10_1.43.0.exe|1.43.0
AIO Plus 7420|Latitude_7X20_1.48.0.exe|1.48.0
Latitude 5330|Latitude_5330_1.33.0.exe|1.33.0
Latitude 5340|Latitude_5340_1.27.1.exe|1.27.1
Latitude 5350|Latitude_5350_1.20.0.exe|1.20.0
Latitude 5440|Latitude_5440_Precision_3480_1.28.1.exe|1.28.1
Latitude 5450|Latitude_5450_Precision_3490_1.20.1.exe|1.20.1
Latitude 5530|Precision_5530_1.42.0.exe|1.42.0
Latitude 5540|Precision_5540_1.42.0.exe|1.42.0
Latitude 7430|Latitude_7X30_1.38.0.exe|1.38.0
Latitude 7440|Latitude_7X40_1.28.1.exe|1.28.1
Latitude 7450|OptiPlex_7450_1.34.0.exe|1.34.0
Micro 7010|OptiPlex_7010_1.33.0_SEMB.exe|1.33.0
Micro QCM1250|Dell_Pro_QBT1250_QBS1250_QBM1250_QCT1250_QCS1250_QCM1250_SEMB_1.12.2.exe|1.12.2
OptiPlex 3000|OptiPlex_3000_1.38.0.exe|1.38.0
OptiPlex 7000|OptiPlex_7000_1.38.0.exe|1.38.0
Precision 5490|OptiPlex_5490_AIO_1.45.0.exe|1.45.0
Precision 5550|Precision_3590_3591_Latitude_5550_1.8.0.exe|1.8.0
Precision 5570|XPS9520_Precision5570_1.39.0_QSL0.exe|1.39.0
Precision 5680|Precision_5680_1_27_0.exe|1.27.0
Precision 5690|Precision_5690_1.18.0.exe|1.18.0
Precision 5820 Tower|Precision_5820_2.48.0.exe|2.48.0
Precision 5860 Tower|Precision_5860_3.5.0.exe|3.5.0
Precision 7560|Precision_7X60_1.44.1.exe|1.44.1
Precision 7670|Precision_7X70_1.8.0.exe|1.8.0
Precision 7680|Precision_7X80_1.9.0.exe|1.9.0
Precision 7770|OptiPlex_7770_7470_1.40.0.exe|1.40.0
Precision 7780|OptiPlex_7780_7480_1.43.0.exe|1.43.0
Precision 7820 Tower|Precision_7820_7920_2.50.0.exe|2.50.0
Precision 7865 Tower|Precision_7865_1.6.1.exe|1.6.1
Precision 7875 Tower|Precision_7875_SHP_02.07.03.exe|2.7.3
Rugged 14 RB14250|Dell_Pro_Rugged_RB14250_RA13250_1.13.1.exe|1.13.1
Tower Plus 7020|OptiPlex_7020_1.22.1_SEMB.exe|1.22.1


@@ -33,30 +33,38 @@ $newName = "E$serial"
Log "Setting computer name to $newName"
Rename-Computer -NewName $newName -Force -ErrorAction SilentlyContinue
# --- Set OOBE complete (must happen before PPKG reboot) ---
Log "Setting OOBE as complete..."
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\OOBE" /v OOBEComplete /t REG_DWORD /d 1 /f | Out-Null
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\OOBE" /v SetupDisplayedEula /t REG_DWORD /d 1 /f | Out-Null
# --- Install provisioning package ---
# This triggers an IMMEDIATE reboot. Nothing below this line executes.
# BPRT app installs (Chrome, Office, Tanium, etc.) happen on the next boot.
# The sync_intune scheduled task (registered by Run-ShopfloorSetup.ps1
# before calling us) fires at the next logon to monitor Intune enrollment.
# IMPORTANT: The PPKG must be installed BEFORE OOBEComplete is set. Bulk
# enrollment PPKGs are designed to run during OOBE; on Windows 11 22H2+ they
# can hang indefinitely if OOBE is already marked complete.
#
# Install-ProvisioningPackage triggers an IMMEDIATE reboot. Nothing below
# this line executes. BPRT app installs (Chrome, Office, Tanium, etc.) happen
# on the next boot. The sync_intune scheduled task (registered by
# Run-ShopfloorSetup.ps1 before calling us) fires at the next logon to
# monitor Intune enrollment.
$ppkgLogDir = "C:\Logs\PPKG"
New-Item -ItemType Directory -Path $ppkgLogDir -Force -ErrorAction SilentlyContinue | Out-Null
Log "Installing provisioning package (PPKG will reboot immediately)..."
Log "PPKG diagnostic logs -> $ppkgLogDir"
try {
Install-ProvisioningPackage -PackagePath $ppkgFile.FullName -ForceInstall -QuietInstall
Install-ProvisioningPackage -PackagePath $ppkgFile.FullName -ForceInstall -QuietInstall -LogsDirectoryPath $ppkgLogDir
Log "Install-ProvisioningPackage returned (reboot may be imminent)."
} catch {
Log "ERROR: Install-ProvisioningPackage failed: $_"
Log "Attempting fallback with Add-ProvisioningPackage..."
try {
Add-ProvisioningPackage -PackagePath $ppkgFile.FullName -ForceInstall -QuietInstall
Add-ProvisioningPackage -PackagePath $ppkgFile.FullName -ForceInstall -QuietInstall -LogsDirectoryPath $ppkgLogDir
Log "Add-ProvisioningPackage returned."
} catch {
Log "ERROR: Fallback also failed: $_"
}
}
# --- Set OOBE complete (only reached if PPKG didn't trigger immediate reboot) ---
Log "Setting OOBE as complete..."
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\OOBE" /v OOBEComplete /t REG_DWORD /d 1 /f | Out-Null
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\OOBE" /v SetupDisplayedEula /t REG_DWORD /d 1 /f | Out-Null
# If we get here, the PPKG didn't reboot immediately. Unlikely but handle it.
Log "PPKG did not trigger immediate reboot. Returning to caller."


@@ -66,11 +66,11 @@ echo 5. Pro Plus Office (x64) with Access
echo 6. Skip enrollment
echo.
set /p enroll=Enter your choice (1-6):
if "%enroll%"=="1" set PPKG=GCCH_Prod_SFLD_NoOffice_US_Exp_20260430_v4.8.ppkg
if "%enroll%"=="2" set PPKG=GCCH_Prod_SFLD_StdOffice-x86_US_Exp_20260430_v4.8.ppkg
if "%enroll%"=="3" set PPKG=GCCH_Prod_SFLD_StdOffice-x64_US_Exp_20260430_v4.8.ppkg
if "%enroll%"=="4" set PPKG=GCCH_Prod_SFLD_ProPlusOffice-x86_US_Exp_20260430_v4.8.ppkg
if "%enroll%"=="5" set PPKG=GCCH_Prod_SFLD_ProPlusOffice-x64_US_Exp_20260430_v4.8.ppkg
if "%enroll%"=="1" set PPKG=GCCH_Prod_SFLD_NoOffice_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="2" set PPKG=GCCH_Prod_SFLD_StdOffice-x86_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="3" set PPKG=GCCH_Prod_SFLD_StdOffice-x64_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="4" set PPKG=GCCH_Prod_SFLD_ProPlusOffice-x86_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="5" set PPKG=GCCH_Prod_SFLD_ProPlusOffice-x64_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="6" set PPKG=
if "%enroll%"=="" goto enroll_menu
@@ -158,7 +158,7 @@ if not "%PCTYPE%"=="" set NEED_ENROLL=1
if "%NEED_ENROLL%"=="0" goto enroll_staged
net use Y: \\10.9.100.1\enrollment /user:pxe-upload pxe /persistent:no
if "%PPKG%"=="" goto enroll_staged
if not exist "Y:\%PPKG%" (
if not exist "Y:\ppkgs\%PPKG%" (
echo WARNING: %PPKG% not found on server. Enrollment will be skipped.
set PPKG=
)
@@ -251,8 +251,8 @@ echo Found Windows at W:
mkdir W:\Enrollment 2>NUL
REM --- Copy site config (drives site-specific values in all setup scripts) ---
if exist "Y:\site-config.json" (
copy /Y "Y:\site-config.json" "W:\Enrollment\site-config.json"
if exist "Y:\config\site-config.json" (
copy /Y "Y:\config\site-config.json" "W:\Enrollment\site-config.json"
echo Copied site-config.json.
) else (
echo WARNING: site-config.json not found on enrollment share.
@@ -260,14 +260,14 @@ if exist "Y:\site-config.json" (
REM --- Copy PPKG if selected ---
if "%PPKG%"=="" goto copy_pctype
copy /Y "Y:\%PPKG%" "W:\Enrollment\%PPKG%"
copy /Y "Y:\ppkgs\%PPKG%" "W:\Enrollment\%PPKG%"
if errorlevel 1 (
echo WARNING: Failed to copy enrollment package.
goto copy_pctype
)
copy /Y "Y:\run-enrollment.ps1" "W:\Enrollment\run-enrollment.ps1"
copy /Y "Y:\wait-for-internet.ps1" "W:\Enrollment\wait-for-internet.ps1"
copy /Y "Y:\migrate-to-wifi.ps1" "W:\Enrollment\migrate-to-wifi.ps1"
copy /Y "Y:\scripts\run-enrollment.ps1" "W:\Enrollment\run-enrollment.ps1"
copy /Y "Y:\scripts\wait-for-internet.ps1" "W:\Enrollment\wait-for-internet.ps1"
copy /Y "Y:\scripts\migrate-to-wifi.ps1" "W:\Enrollment\migrate-to-wifi.ps1"
REM --- Create enroll.cmd at drive root as manual fallback ---
> W:\enroll.cmd (
@@ -307,15 +307,15 @@ if exist "Y:\shopfloor-setup\%PCTYPE%" (
)
REM --- Stage preinstall bundle (apps installed locally to save Azure bandwidth) ---
if exist "Y:\preinstall\preinstall.json" (
if exist "Y:\pre-install\preinstall.json" (
mkdir W:\PreInstall 2>NUL
mkdir W:\PreInstall\installers 2>NUL
copy /Y "Y:\preinstall\preinstall.json" "W:\PreInstall\preinstall.json"
if exist "Y:\preinstall\installers" (
xcopy /E /Y /I "Y:\preinstall\installers" "W:\PreInstall\installers\"
copy /Y "Y:\pre-install\preinstall.json" "W:\PreInstall\preinstall.json"
if exist "Y:\pre-install\installers" (
xcopy /E /Y /I "Y:\pre-install\installers" "W:\PreInstall\installers\"
echo Staged preinstall bundle to W:\PreInstall.
) else (
echo WARNING: Y:\preinstall\installers not found - preinstall.json staged without installers.
echo WARNING: Y:\pre-install\installers not found - preinstall.json staged without installers.
)
) else (
echo No preinstall bundle on PXE server - skipping.
@@ -329,9 +329,9 @@ REM during shopfloor-setup (Azure DSC provisions those creds later), so this
REM bootstrap exists to get the first-install through. Post-imaging, the logon-
REM triggered CMM-Enforce.ps1 takes over from the share.
if /i not "%PCTYPE%"=="CMM" goto skip_cmm_stage
if exist "Y:\cmm-installers\cmm-manifest.json" (
if exist "Y:\installers-post\cmm\cmm-manifest.json" (
mkdir W:\CMM-Install 2>NUL
xcopy /E /Y /I "Y:\cmm-installers" "W:\CMM-Install\"
xcopy /E /Y /I "Y:\installers-post\cmm" "W:\CMM-Install\"
echo Staged CMM bootstrap to W:\CMM-Install.
) else (
echo WARNING: Y:\cmm-installers not found - CMM PC cannot install Hexagon apps at imaging time.


@@ -22,13 +22,13 @@
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
AUTOINSTALL_DIR="$SCRIPT_DIR/autoinstall"
PLAYBOOK_DIR="$SCRIPT_DIR/playbook"
OFFLINE_PKG_DIR="$SCRIPT_DIR/offline-packages"
WEBAPP_DIR="$SCRIPT_DIR/webapp"
PIP_WHEELS_DIR="$SCRIPT_DIR/pip-wheels"
BOOT_TOOLS_DIR="$SCRIPT_DIR/boot-tools"
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
AUTOINSTALL_DIR="$REPO_ROOT/autoinstall"
PLAYBOOK_DIR="$REPO_ROOT/playbook"
OFFLINE_PKG_DIR="$REPO_ROOT/offline-packages"
WEBAPP_DIR="$REPO_ROOT/webapp"
PIP_WHEELS_DIR="$REPO_ROOT/pip-wheels"
BOOT_TOOLS_DIR="$REPO_ROOT/boot-tools"
# --- Validate arguments ---
if [ $# -lt 1 ]; then
@@ -43,7 +43,7 @@ if [ $# -lt 1 ]; then
fi
UBUNTU_ISO="$(realpath "$1")"
OUTPUT_ISO="${2:-$SCRIPT_DIR/pxe-server-proxmox.iso}"
OUTPUT_ISO="${2:-$REPO_ROOT/pxe-server-proxmox.iso}"
# --- Validate prerequisites ---
echo "============================================"
@@ -88,7 +88,7 @@ fi
echo "Ubuntu ISO : $UBUNTU_ISO"
echo "Output ISO : $OUTPUT_ISO"
echo "Source Dir : $SCRIPT_DIR"
echo "Source Dir : $REPO_ROOT"
echo ""
# --- Setup work directory with cleanup trap ---
@@ -249,7 +249,7 @@ if [ -d "$PIP_WHEELS_DIR" ]; then
fi
# WinPE boot files (wimboot, boot.wim, BCD, ipxe.efi, etc.)
BOOT_FILES_DIR="$SCRIPT_DIR/boot-files"
BOOT_FILES_DIR="$REPO_ROOT/boot-files"
if [ -d "$BOOT_FILES_DIR" ]; then
BOOT_FILE_COUNT=0
for bf in "$BOOT_FILES_DIR"/*; do


@@ -17,10 +17,10 @@
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
AUTOINSTALL_DIR="$SCRIPT_DIR/autoinstall"
PLAYBOOK_DIR="$SCRIPT_DIR/playbook"
OFFLINE_PKG_DIR="$SCRIPT_DIR/offline-packages"
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
AUTOINSTALL_DIR="$REPO_ROOT/autoinstall"
PLAYBOOK_DIR="$REPO_ROOT/playbook"
OFFLINE_PKG_DIR="$REPO_ROOT/offline-packages"
# --- Validate arguments ---
if [ $# -lt 2 ]; then
@@ -87,7 +87,7 @@ echo "PXE Server USB Builder"
echo "============================================"
echo "USB Device : $USB_DEV"
echo "ISO : $ISO_PATH"
echo "Source Dir : $SCRIPT_DIR"
echo "Source Dir : $REPO_ROOT"
echo ""
echo "This will ERASE all data on $USB_DEV."
read -rp "Continue? (y/N): " PROCEED
@@ -139,7 +139,7 @@ xorriso -as mkisofs -r \
-no-emul-boot \
-boot-load-size 10160 \
. 2>/dev/null
cd "$SCRIPT_DIR"
cd "$REPO_ROOT"
echo " ISO rebuilt with 'autoinstall' kernel param and 5s GRUB timeout"
echo " Writing patched ISO to $USB_DEV..."
@@ -216,7 +216,7 @@ cp -r "$PLAYBOOK_DIR" "$MOUNT_POINT/playbook"
echo " Copied playbook/"
# Copy webapp
WEBAPP_DIR="$SCRIPT_DIR/webapp"
WEBAPP_DIR="$REPO_ROOT/webapp"
if [ -d "$WEBAPP_DIR" ]; then
mkdir -p "$MOUNT_POINT/webapp"
cp -r "$WEBAPP_DIR/app.py" "$WEBAPP_DIR/requirements.txt" "$MOUNT_POINT/webapp/"
@@ -225,7 +225,7 @@ if [ -d "$WEBAPP_DIR" ]; then
fi
# Copy pip wheels for offline Flask install
PIP_WHEELS_DIR="$SCRIPT_DIR/pip-wheels"
PIP_WHEELS_DIR="$REPO_ROOT/pip-wheels"
if [ ! -d "$PIP_WHEELS_DIR" ]; then
echo " pip-wheels/ not found — downloading now..."
mkdir -p "$PIP_WHEELS_DIR"
@@ -245,7 +245,7 @@ if [ -d "$PIP_WHEELS_DIR" ]; then
fi
# Copy WinPE boot files (wimboot, boot.wim, BCD, ipxe.efi, etc.)
BOOT_FILES_DIR="$SCRIPT_DIR/boot-files"
BOOT_FILES_DIR="$REPO_ROOT/boot-files"
if [ -d "$BOOT_FILES_DIR" ]; then
BOOT_FILE_COUNT=0
for bf in "$BOOT_FILES_DIR"/*; do
@@ -261,7 +261,7 @@ else
fi
# Copy boot tools (Clonezilla, Blancco, Memtest) if prepared
BOOT_TOOLS_DIR="$SCRIPT_DIR/boot-tools"
BOOT_TOOLS_DIR="$REPO_ROOT/boot-tools"
if [ -d "$BOOT_TOOLS_DIR" ]; then
cp -r "$BOOT_TOOLS_DIR" "$MOUNT_POINT/boot-tools"
TOOLS_SIZE=$(du -sh "$MOUNT_POINT/boot-tools" | cut -f1)
@@ -271,25 +271,68 @@ else
fi
# Copy enrollment directory (PPKGs, run-enrollment.ps1) if present
ENROLLMENT_DIR="$SCRIPT_DIR/enrollment"
# FAT32 has a 4GB max file size; files larger than that are split into chunks
# that the playbook reassembles with `cat`.
ENROLLMENT_DIR="$REPO_ROOT/enrollment"
FAT32_MAX=$((3500 * 1024 * 1024)) # 3500 MiB chunks, safely under 4GiB FAT32 limit
if [ -d "$ENROLLMENT_DIR" ]; then
mkdir -p "$MOUNT_POINT/enrollment"
cp -r "$ENROLLMENT_DIR"/* "$MOUNT_POINT/enrollment/" 2>/dev/null || true
PPKG_COUNT=$(find "$MOUNT_POINT/enrollment" -name '*.ppkg' 2>/dev/null | wc -l)
SPLIT_COUNT=0
for f in "$ENROLLMENT_DIR"/*; do
[ -e "$f" ] || continue
bn="$(basename "$f")"
if [ -f "$f" ] && [ "$(stat -c%s "$f")" -gt "$FAT32_MAX" ]; then
echo " Splitting $bn (>$((FAT32_MAX / 1024 / 1024))M) into chunks..."
split -b "$FAT32_MAX" -d -a 2 "$f" "$MOUNT_POINT/enrollment/${bn}.part."
SPLIT_COUNT=$((SPLIT_COUNT + 1))
else
cp -r "$f" "$MOUNT_POINT/enrollment/"
fi
done
PPKG_COUNT=$(find "$ENROLLMENT_DIR" -maxdepth 1 -name '*.ppkg' 2>/dev/null | wc -l)
ENROLL_SIZE=$(du -sh "$MOUNT_POINT/enrollment" | cut -f1)
echo " Copied enrollment/ ($ENROLL_SIZE, $PPKG_COUNT PPKGs)"
echo " Copied enrollment/ ($ENROLL_SIZE, $PPKG_COUNT PPKGs, $SPLIT_COUNT split)"
else
echo " No enrollment/ directory found (PPKGs can be uploaded via webapp later)"
fi
# Copy BIOS update binaries if staged
BIOS_DIR="$REPO_ROOT/bios-staging"
if [ -d "$BIOS_DIR" ] && [ "$(ls -A "$BIOS_DIR" 2>/dev/null)" ]; then
echo " Copying BIOS update binaries from bios-staging/..."
mkdir -p "$MOUNT_POINT/bios"
cp -r "$BIOS_DIR"/* "$MOUNT_POINT/bios/" 2>/dev/null || true
BIOS_COUNT=$(find "$MOUNT_POINT/bios" -name '*.exe' 2>/dev/null | wc -l)
BIOS_SIZE=$(du -sh "$MOUNT_POINT/bios" | cut -f1)
echo " Copied bios/ ($BIOS_SIZE, $BIOS_COUNT files)"
else
echo " No bios-staging/ found (BIOS updates can be pushed via download-drivers.py later)"
fi
# Copy Dell driver packs if staged
DRIVERS_DIR="$SCRIPT_DIR/drivers-staging"
# Files larger than the FAT32 4GB limit are split into chunks; the playbook
# reassembles them on the server.
DRIVERS_DIR="$REPO_ROOT/drivers-staging"
if [ -d "$DRIVERS_DIR" ] && [ "$(ls -A "$DRIVERS_DIR" 2>/dev/null)" ]; then
echo " Copying Dell driver packs from drivers-staging/..."
mkdir -p "$MOUNT_POINT/drivers"
cp -r "$DRIVERS_DIR"/* "$MOUNT_POINT/drivers/" 2>/dev/null || true
DRV_SPLIT=0
# Mirror directory tree first (fast)
(cd "$DRIVERS_DIR" && find . -type d -exec mkdir -p "$MOUNT_POINT/drivers/{}" \;)
# Copy files <4GB directly, split files >=4GB into chunks
while IFS= read -r f; do
rel="${f#$DRIVERS_DIR/}"
dest="$MOUNT_POINT/drivers/$rel"
if [ "$(stat -c%s "$f")" -gt "$FAT32_MAX" ]; then
echo " Splitting $rel..."
split -b "$FAT32_MAX" -d -a 2 "$f" "${dest}.part."
DRV_SPLIT=$((DRV_SPLIT + 1))
else
cp "$f" "$dest"
fi
done < <(find "$DRIVERS_DIR" -type f)
DRIVERS_SIZE=$(du -sh "$MOUNT_POINT/drivers" | cut -f1)
echo " Copied drivers/ ($DRIVERS_SIZE, $DRV_SPLIT split)"
else
echo " No drivers-staging/ found (drivers can be downloaded later)"
fi
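Files over the FAT32 limit land on the USB as `name.ext.part.00`, `name.ext.part.01`, ...; the playbook reassembles them server-side. A minimal sketch of that reassembly (hypothetical stand-alone helper; the real logic lives in the playbook tasks):

```shell
#!/bin/sh
# Reassemble FAT32-split files: <file>.part.00 .part.01 ... -> <file>
# Hypothetical helper; the playbook performs the equivalent on the server.
reassemble() {
  dir="$1"
  find "$dir" -name '*.part.00' | while read -r first; do
    base="${first%.part.00}"
    # Shell glob order is lexical, so .part.00 .part.01 ... concatenate in sequence.
    cat "$base".part.* > "$base" && rm -f "$base".part.*
  done
}
```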

scripts/deploy-bios.sh Executable file

@@ -0,0 +1,50 @@
#!/bin/bash
# deploy-bios.sh - Deploy BIOS update files to a running PXE server
# Copies Flash64W.exe, BIOS binaries, models.txt, and check-bios.cmd
#
# Usage: ./deploy-bios.sh [server-ip]
# Default server: 10.9.100.1
set -e
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
PXE_SERVER="${1:-10.9.100.1}"
PXE_USER="pxe"
PXE_PASS="pxe"
REMOTE_DIR="/srv/samba/enrollment/BIOS"
BIOS_DIR="$REPO_ROOT/bios-staging"
MANIFEST="$REPO_ROOT/playbook/shopfloor-setup/BIOS/models.txt"
CHECK_SCRIPT="$REPO_ROOT/playbook/shopfloor-setup/BIOS/check-bios.cmd"
SSH="sshpass -p $PXE_PASS ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 $PXE_USER@$PXE_SERVER"
SCP="sshpass -p $PXE_PASS scp -o StrictHostKeyChecking=no -o ConnectTimeout=10"
# Verify sources exist
if [ ! -d "$BIOS_DIR" ] || [ -z "$(ls -A "$BIOS_DIR" 2>/dev/null)" ]; then
echo "ERROR: bios-staging/ is empty or missing. Run ./pull-bios.sh first."
exit 1
fi
if [ ! -f "$MANIFEST" ]; then
echo "ERROR: playbook/shopfloor-setup/BIOS/models.txt not found."
exit 1
fi
echo "Deploying BIOS files to $PXE_SERVER..."
# Create remote directory
$SSH "sudo mkdir -p '$REMOTE_DIR' && sudo chown $PXE_USER:$PXE_USER '$REMOTE_DIR'"
# Copy check-bios.cmd and models.txt
echo " Copying check-bios.cmd + models.txt..."
$SCP "$CHECK_SCRIPT" "$MANIFEST" "$PXE_USER@$PXE_SERVER:$REMOTE_DIR/"
# Copy BIOS binaries
COUNT=$(find "$BIOS_DIR" -name '*.exe' | wc -l)
SIZE=$(du -sh "$BIOS_DIR" | cut -f1)
echo " Copying $COUNT BIOS binaries ($SIZE)..."
$SCP "$BIOS_DIR"/*.exe "$PXE_USER@$PXE_SERVER:$REMOTE_DIR/"
# Verify
REMOTE_COUNT=$($SSH "find '$REMOTE_DIR' -name '*.exe' | wc -l")
echo "Done: $REMOTE_COUNT files on $PXE_SERVER:$REMOTE_DIR"


@@ -30,7 +30,7 @@ if [ "${IN_DOCKER:-}" != "1" ] && [ "$HOST_CODENAME" != "noble" ]; then
fi
SCRIPT_PATH="$(readlink -f "$0")"
REPO_DIR="$(cd "$(dirname "$SCRIPT_PATH")"/.. && pwd)"
mkdir -p "$OUT_DIR_ABS"
docker run --rm -i \
@@ -39,7 +39,7 @@ if [ "${IN_DOCKER:-}" != "1" ] && [ "$HOST_CODENAME" != "noble" ]; then
-e IN_DOCKER=1 \
-w /repo \
ubuntu:24.04 \
bash -c "apt-get update -qq && apt-get install -y --no-install-recommends sudo python3-pip python3-setuptools python3-wheel ca-certificates >/dev/null && /repo/scripts/download-packages.sh /out"
echo ""
echo "============================================"
@@ -127,8 +127,8 @@ echo " $DEB_COUNT packages ($TOTAL_SIZE)"
# Download pip wheels for Flask webapp (offline install)
echo "[4/4] Downloading Python wheels for webapp..."
# Place pip-wheels at the repo root (or /repo when in docker), not next to OUT_DIR
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
PIP_DIR="$REPO_ROOT/pip-wheels"
mkdir -p "$PIP_DIR"
pip3 download -d "$PIP_DIR" flask lxml 2>&1 | tail -5


@@ -15,13 +15,13 @@
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
OUT_DIR="$REPO_ROOT/boot-tools"
BLANCCO_ISO="${1:-}"
# Auto-detect Blancco ISO in project directory
if [ -z "$BLANCCO_ISO" ]; then
BLANCCO_ISO=$(find "$REPO_ROOT" -maxdepth 1 \( -name '*DriveEraser*.iso' -o -name '*blancco*.iso' \) 2>/dev/null | head -1)
fi
mkdir -p "$OUT_DIR"/{clonezilla,blancco,memtest}
@@ -192,7 +192,7 @@ PYEOF
dd bs=512 conv=sync 2>/dev/null > "$OUT_DIR/blancco/config.img"
echo " Reports: SMB blancco@10.9.100.1/blancco-reports, bootable report enabled"
fi
cd "$REPO_ROOT"
rm -rf "$CFGTMP"
fi
@@ -241,7 +241,7 @@ PYEOF
# --- Build GRUB EFI binary for Blancco chainload ---
# Broadcom iPXE can't pass initrd to Linux kernels in UEFI mode.
# Solution: iPXE chains to grubx64.efi, which loads kernel+initrd via TFTP.
GRUB_CFG="$REPO_ROOT/boot-tools/blancco/grub-blancco.cfg"
if [ -f "$GRUB_CFG" ]; then
if command -v grub-mkstandalone &>/dev/null; then
echo " Building grubx64.efi (GRUB chainload for Blancco)..."
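For reference, the shape of the chainload config this points at — a sketch assuming the TFTP layout and kernel args described in the commit message, not the exact file:

```
# grub-blancco.cfg (sketch; the real file lives in boot-tools/blancco/)
set default=0
set timeout=0
menuentry "Blancco Drive Eraser" {
    # TFTP rather than HTTP: GRUB's HTTP module times out on multi-MB files.
    linux (tftp,10.9.100.1)/blancco/vmlinuz modprobe.blacklist=iwlwifi,iwlmvm,btusb
    initrd (tftp,10.9.100.1)/blancco/initrd
}
```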

scripts/pull-bios.sh Executable file

@@ -0,0 +1,21 @@
#!/bin/bash
# pull-bios.sh - Pull BIOS update binaries from prod PXE server to bios-staging/
# Run this with the USB NIC plugged in, before building the USB.
set -e
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
DEST="$REPO_ROOT/bios-staging"
PXE_SERVER="10.9.100.1"
PXE_USER="pxe"
PXE_PASS="pxe"
mkdir -p "$DEST"
echo "Pulling BIOS binaries from $PXE_SERVER..."
sshpass -p "$PXE_PASS" scp -o StrictHostKeyChecking=no -o ConnectTimeout=10 \
"$PXE_USER@$PXE_SERVER:/srv/samba/enrollment/BIOS/*.exe" "$DEST/"
COUNT=$(find "$DEST" -name '*.exe' | wc -l)
SIZE=$(du -sh "$DEST" | cut -f1)
echo "Done: $COUNT files ($SIZE) in bios-staging/"


@@ -1,129 +0,0 @@
Purpose
Document a repeatable, “build-from-scratch” procedure for deploying an Ubuntu-based PXE boot server that can host GE Aerospace Windows PE images.
Prerequisites
Hardware: Server or PC with ≥ 8 GB RAM, ≥ 250 GB disk, and two NICs (one for build/Internet, one for the isolated PXE LAN)
https://myaccess.microsoft.us/@ge.onmicrosoft.us#/access-packages/active
EPM Rufus Exception Request
EPM DT Functions
DLP - Encrypted Removable (USB) Long Term Access
Software:
Ubuntu Server 24.04 ISO
Rufus (latest)
playbook folder containing pxe_server_setup.yml and supporting files
GE Aerospace Media Creator LITE (for caching WinPE images)
Two USB thumb drives (one ≥ 8 GB for Ubuntu install; one ≥ 32 GB for WinPE media)
Step-by-Step Procedure
Create the Ubuntu Server installer USB
1.1 Download Ubuntu Server 24.04 from https://ubuntu.com/download/server.
1.2 Download and run Rufus (https://rufus.ie/en/).
1.3 Insert an empty USB, select it in Rufus.
1.4 Click Select, browse to the Ubuntu ISO, then click Start.
1.5 When Rufus finishes, copy your playbook folder to the root of that same USB, then eject it safely.
Install Ubuntu on the PXE server
2.1 Insert the USB into the target machine and power on.
2.2 Press F12 (or the vendor's one-time boot key) and choose the USB device.
2.3 Follow Ubuntu's installer:
Network configuration screen:
Select the first option and let it pick up a network and IPv4 address automatically.
Then select WiFi and choose the guest network.
Follow the prompts and enter the information for your network.
Click Done.
You do not need a proxy; hit Done.
For the mirror address, add nothing and hit Done. The download should start.
After that, select Next.
In the file system summary, hit Done; when the "Confirm destructive action" box pops up, select Continue.
Configure your profile, then Done.
Skip the upgrade to Ubuntu Pro.
No SSH.
Don't select any featured server snaps; just select Done.
Ubuntu will install, then reboot the system.
2.4 Create a user (e.g., pxe) with a simple, temporary password (change later).
Prepare the OS
3.1 Log in as the user you created.
3.2 Update the system:
sudo apt update && sudo apt upgrade -y
3.3 Install Ansible:
sudo apt install ansible -y
Mount the installer USB and run the playbook
4.1 Identify the USB device:
lsblk
Note the device (e.g., /dev/sda1).
4.2 Mount it and run the playbook:
sudo mkdir -p /mnt/usb
sudo mount /dev/sda1 /mnt/usb
cd /mnt/usb/playbook
ansible-playbook pxe_server_setup.yml
4.3 When Ansible finishes, unmount the USB:
cd ~
sudo umount /mnt/usb
Cache Windows PE images
5.1 On a separate workstation, use GE Aerospace Media Creator LITE to cache all desired images (or start with one).
5.2 Create a WinPE USB using the same tool and eject it safely.
Import WinPE content to the PXE share
6.1 Insert the WinPE USB into the PXE server.
6.2 Find the new device (e.g., /dev/sdb2) with lsblk.
6.3 Mount it and copy files:
sudo mkdir -p /mnt/usb2
sudo mount /dev/sdb2 /mnt/usb2
sudo cp -r /mnt/usb2/. /srv/samba/winpeapps/standard
sudo umount /mnt/usb2
Finalise and isolate
7.1 Reboot the server:
sudo reboot
7.2 After it comes back up, move the primary NIC from the Internet-enabled network to the isolated switch that will serve PXE clients.
Verification
Connect a test workstation to the isolated switch.
In BIOS/UEFI, set Network Boot (PXE) as first boot, then boot.
Confirm the client pulls an IP from the PXE server and sees the WinPE menu.
Launch a WinPE image to ensure TFTP, HTTP (NBD), and SMB shares respond correctly.


@@ -66,11 +66,11 @@ echo 5. Pro Plus Office (x64) with Access
echo 6. Skip enrollment
echo.
set /p enroll=Enter your choice (1-6):
if "%enroll%"=="1" set PPKG=GCCH_Prod_SFLD_NoOffice_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="2" set PPKG=GCCH_Prod_SFLD_StdOffice-x86_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="3" set PPKG=GCCH_Prod_SFLD_StdOffice-x64_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="4" set PPKG=GCCH_Prod_SFLD_ProPlusOffice-x86_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="5" set PPKG=GCCH_Prod_SFLD_ProPlusOffice-x64_US_Exp_20260430_v4.10.ppkg
if "%enroll%"=="6" set PPKG=
if "%enroll%"=="" goto enroll_menu
@@ -158,7 +158,7 @@ if not "%PCTYPE%"=="" set NEED_ENROLL=1
if "%NEED_ENROLL%"=="0" goto enroll_staged
net use Y: \\10.9.100.1\enrollment /user:pxe-upload pxe /persistent:no
if "%PPKG%"=="" goto enroll_staged
if not exist "Y:\ppkgs\%PPKG%" (
echo WARNING: %PPKG% not found on server. Enrollment will be skipped.
set PPKG=
)
@@ -251,8 +251,8 @@ echo Found Windows at W:
mkdir W:\Enrollment 2>NUL
REM --- Copy site config (drives site-specific values in all setup scripts) ---
if exist "Y:\config\site-config.json" (
copy /Y "Y:\config\site-config.json" "W:\Enrollment\site-config.json"
echo Copied site-config.json.
) else (
echo WARNING: site-config.json not found on enrollment share.
@@ -260,14 +260,14 @@ if exist "Y:\site-config.json" (
REM --- Copy PPKG if selected ---
if "%PPKG%"=="" goto copy_pctype
copy /Y "Y:\ppkgs\%PPKG%" "W:\Enrollment\%PPKG%"
if errorlevel 1 (
echo WARNING: Failed to copy enrollment package.
goto copy_pctype
)
copy /Y "Y:\scripts\run-enrollment.ps1" "W:\Enrollment\run-enrollment.ps1"
copy /Y "Y:\scripts\wait-for-internet.ps1" "W:\Enrollment\wait-for-internet.ps1"
copy /Y "Y:\scripts\migrate-to-wifi.ps1" "W:\Enrollment\migrate-to-wifi.ps1"
REM --- Create enroll.cmd at drive root as manual fallback ---
> W:\enroll.cmd (
@@ -307,15 +307,15 @@ if exist "Y:\shopfloor-setup\%PCTYPE%" (
)
REM --- Stage preinstall bundle (apps installed locally to save Azure bandwidth) ---
if exist "Y:\pre-install\preinstall.json" (
mkdir W:\PreInstall 2>NUL
mkdir W:\PreInstall\installers 2>NUL
copy /Y "Y:\pre-install\preinstall.json" "W:\PreInstall\preinstall.json"
if exist "Y:\pre-install\installers" (
xcopy /E /Y /I "Y:\pre-install\installers" "W:\PreInstall\installers\"
echo Staged preinstall bundle to W:\PreInstall.
) else (
echo WARNING: Y:\pre-install\installers not found - preinstall.json staged without installers.
)
) else (
echo No preinstall bundle on PXE server - skipping.
@@ -329,9 +329,9 @@ REM during shopfloor-setup (Azure DSC provisions those creds later), so this
REM bootstrap exists to get the first-install through. Post-imaging, the logon-
REM triggered CMM-Enforce.ps1 takes over from the share.
if /i not "%PCTYPE%"=="CMM" goto skip_cmm_stage
if exist "Y:\installers-post\cmm\cmm-manifest.json" (
mkdir W:\CMM-Install 2>NUL
xcopy /E /Y /I "Y:\installers-post\cmm" "W:\CMM-Install\"
echo Staged CMM bootstrap to W:\CMM-Install.
) else (
echo WARNING: Y:\cmm-installers not found - CMM PC cannot install Hexagon apps at imaging time.


@@ -1,332 +0,0 @@
#!/bin/bash
#
# test-lab.sh — Full PXE lab: server + client VMs on an isolated network
#
# Creates an isolated libvirt network, boots the PXE server from the
# Proxmox installer ISO, then launches a UEFI PXE client to test the
# full boot chain (DHCP -> TFTP -> iPXE -> boot menu).
#
# Usage:
# ./test-lab.sh /path/to/ubuntu-24.04.iso # Launch server
# ./test-lab.sh --client # Launch PXE client
# ./test-lab.sh --status # Check if server is ready
# ./test-lab.sh --destroy # Remove everything
#
# Workflow:
# 1. Run the script with the Ubuntu ISO — server VM starts installing
# 2. Wait ~15 minutes (monitor with: sudo virsh console pxe-lab-server)
# 3. Run --status to check if PXE services are up
# 4. Run --client to launch a PXE client VM
# 5. Open virt-viewer to watch the client PXE boot:
# virt-viewer pxe-lab-client
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
NET_NAME="pxe-lab"
SERVER_NAME="pxe-lab-server"
CLIENT_NAME="pxe-lab-client"
SERVER_DISK="/var/lib/libvirt/images/${SERVER_NAME}.qcow2"
PROXMOX_ISO="$SCRIPT_DIR/pxe-server-proxmox.iso"
VM_RAM=4096
VM_CPUS=2
VM_DISK_SIZE=40 # GB
# --- Helper: check if we can run virsh ---
check_virsh() {
if ! virsh net-list &>/dev/null; then
echo "ERROR: Cannot connect to libvirt. Are you in the 'libvirt' group?"
echo " sudo usermod -aG libvirt $USER && newgrp libvirt"
exit 1
fi
}
# --- Helper: ensure network exists ---
ensure_network() {
if ! virsh net-info "$NET_NAME" &>/dev/null; then
echo "Creating isolated network ($NET_NAME)..."
NET_XML=$(mktemp)
cat > "$NET_XML" << EOF
<network>
<name>$NET_NAME</name>
<bridge name="virbr-pxe" stp="on" delay="0"/>
</network>
EOF
virsh net-define "$NET_XML" >/dev/null
rm "$NET_XML"
fi
if ! virsh net-info "$NET_NAME" 2>/dev/null | grep -q "Active:.*yes"; then
virsh net-start "$NET_NAME" >/dev/null
fi
}
# =====================================================================
# --destroy: Remove everything
# =====================================================================
if [ "${1:-}" = "--destroy" ]; then
check_virsh
echo "Destroying PXE lab environment..."
virsh destroy "$CLIENT_NAME" 2>/dev/null || true
virsh undefine "$CLIENT_NAME" --nvram 2>/dev/null || true
virsh vol-delete "${CLIENT_NAME}.qcow2" --pool default 2>/dev/null || true
virsh destroy "$SERVER_NAME" 2>/dev/null || true
virsh undefine "$SERVER_NAME" 2>/dev/null || true
virsh vol-delete "${SERVER_NAME}.qcow2" --pool default 2>/dev/null || true
rm -f "/tmp/${SERVER_NAME}-vmlinuz" "/tmp/${SERVER_NAME}-initrd"
virsh net-destroy "$NET_NAME" 2>/dev/null || true
virsh net-undefine "$NET_NAME" 2>/dev/null || true
echo "Done."
exit 0
fi
# =====================================================================
# --status: Check if PXE server is ready
# =====================================================================
if [ "${1:-}" = "--status" ]; then
check_virsh
echo "PXE Lab Status"
echo "============================================"
# Network
if virsh net-info "$NET_NAME" &>/dev/null; then
echo " Network ($NET_NAME): $(virsh net-info "$NET_NAME" 2>/dev/null | grep Active | awk '{print $2}')"
else
echo " Network ($NET_NAME): not defined"
fi
# Server VM
if virsh dominfo "$SERVER_NAME" &>/dev/null; then
STATE=$(virsh domstate "$SERVER_NAME" 2>/dev/null)
echo " Server ($SERVER_NAME): $STATE"
else
echo " Server ($SERVER_NAME): not defined"
fi
# Client VM
if virsh dominfo "$CLIENT_NAME" &>/dev/null; then
STATE=$(virsh domstate "$CLIENT_NAME" 2>/dev/null)
echo " Client ($CLIENT_NAME): $STATE"
else
echo " Client ($CLIENT_NAME): not defined"
fi
# Try to check PXE services on the server
# Add a temporary IP to the bridge so we can reach the server
echo ""
echo "Checking PXE server services..."
BRIDGE_HAS_IP=false
ADDED_IP=false
if ip addr show virbr-pxe 2>/dev/null | grep -q "10.9.100.254"; then
BRIDGE_HAS_IP=true
else
# Need sudo for IP manipulation — try it, skip if unavailable
if sudo -n ip addr add 10.9.100.254/24 dev virbr-pxe 2>/dev/null; then
BRIDGE_HAS_IP=true
ADDED_IP=true
fi
fi
if [ "$BRIDGE_HAS_IP" = true ]; then
# Check each service with a short timeout
for check in \
"DHCP/TFTP (dnsmasq):10.9.100.1:69:udp" \
"HTTP (Apache):10.9.100.1:80:tcp" \
"iPXE boot script:10.9.100.1:4433:tcp" \
"Samba:10.9.100.1:445:tcp" \
"Webapp:10.9.100.1:9009:tcp"; do
LABEL="${check%%:*}"
REST="${check#*:}"
HOST="${REST%%:*}"
REST="${REST#*:}"
PORT="${REST%%:*}"
PROTO="${REST#*:}"
if [ "$PROTO" = "tcp" ]; then
if timeout 2 bash -c "echo >/dev/tcp/$HOST/$PORT" 2>/dev/null; then
echo " [UP] $LABEL (port $PORT)"
else
echo " [DOWN] $LABEL (port $PORT)"
fi
else
# UDP — just check if host is reachable
if ping -c1 -W1 "$HOST" &>/dev/null; then
echo " [PING] $LABEL (host reachable)"
else
echo " [DOWN] $LABEL (host unreachable)"
fi
fi
done
# Clean up the temporary IP (only if we added it)
if [ "$ADDED_IP" = true ]; then
sudo -n ip addr del 10.9.100.254/24 dev virbr-pxe 2>/dev/null || true
fi
else
echo " (Cannot reach server — bridge not available)"
echo " Use 'sudo virsh console $SERVER_NAME' to check manually"
fi
echo ""
echo "Commands:"
echo " sudo virsh console $SERVER_NAME # Server serial console"
echo " virt-viewer $CLIENT_NAME # Client VNC display"
exit 0
fi
# =====================================================================
# --client: Launch PXE client VM
# =====================================================================
if [ "${1:-}" = "--client" ]; then
check_virsh
ensure_network
# Check if server VM exists and is running
if ! virsh domstate "$SERVER_NAME" 2>/dev/null | grep -q "running"; then
echo "WARNING: Server VM ($SERVER_NAME) is not running."
echo " The PXE client needs the server for DHCP and boot files."
read -rp " Continue anyway? (y/N): " PROCEED
if [[ ! "$PROCEED" =~ ^[Yy]$ ]]; then
exit 1
fi
fi
# Remove existing client if present
if virsh dominfo "$CLIENT_NAME" &>/dev/null; then
echo "Removing existing client VM..."
virsh destroy "$CLIENT_NAME" 2>/dev/null || true
virsh undefine "$CLIENT_NAME" --nvram 2>/dev/null || true
virsh vol-delete "${CLIENT_NAME}.qcow2" --pool default 2>/dev/null || true
fi
echo "Launching PXE client ($CLIENT_NAME)..."
echo " UEFI PXE boot on network: $NET_NAME"
echo ""
virsh vol-create-as default "${CLIENT_NAME}.qcow2" "${VM_DISK_SIZE}G" --format qcow2 >/dev/null
virt-install \
--name "$CLIENT_NAME" \
--memory "$VM_RAM" \
--vcpus "$VM_CPUS" \
--disk "vol=default/${CLIENT_NAME}.qcow2" \
--network network="$NET_NAME",model=virtio \
--os-variant ubuntu24.04 \
--boot uefi,network \
--graphics vnc,listen=0.0.0.0 \
--noautoconsole
# Get VNC port
VNC_PORT=$(virsh vncdisplay "$CLIENT_NAME" 2>/dev/null | sed 's/://' || echo "?")
VNC_PORT=$((5900 + VNC_PORT))
echo ""
echo "============================================"
echo "PXE client launched!"
echo "============================================"
echo ""
echo "Watch the PXE boot:"
echo " virt-viewer $CLIENT_NAME"
echo " (or VNC to localhost:$VNC_PORT)"
echo ""
echo "Expected boot sequence:"
echo " 1. UEFI firmware -> PXE boot"
echo " 2. DHCP from server (10.9.100.x)"
echo " 3. TFTP download ipxe.efi"
echo " 4. iPXE loads boot menu from port 4433"
echo " 5. GE Aerospace PXE Boot Menu appears"
echo ""
echo "Manage:"
echo " sudo virsh reboot $CLIENT_NAME # Retry PXE boot"
echo " sudo virsh destroy $CLIENT_NAME # Stop client"
echo " $0 --destroy # Remove everything"
exit 0
fi
# =====================================================================
# Default: Launch PXE server VM
# =====================================================================
check_virsh
UBUNTU_ISO="${1:-}"
if [ -z "$UBUNTU_ISO" ] || [ ! -f "$UBUNTU_ISO" ]; then
echo "Usage: sudo $0 /path/to/ubuntu-24.04-live-server-amd64.iso"
echo ""
echo "Commands:"
echo " $0 /path/to/ubuntu.iso Launch PXE server VM"
echo " $0 --client Launch PXE client VM"
echo " $0 --status Check server readiness"
echo " $0 --destroy Remove everything"
exit 1
fi
# Check if Proxmox ISO exists, build if not
if [ ! -f "$PROXMOX_ISO" ]; then
echo "Proxmox ISO not found. Building it first..."
echo ""
"$SCRIPT_DIR/build-proxmox-iso.sh" "$UBUNTU_ISO"
echo ""
fi
# Check server doesn't already exist
if virsh dominfo "$SERVER_NAME" &>/dev/null; then
echo "ERROR: Server VM already exists. Destroy first with: sudo $0 --destroy"
exit 1
fi
echo "============================================"
echo "PXE Lab Environment Setup"
echo "============================================"
echo ""
# --- Step 1: Create isolated network ---
echo "[1/3] Setting up isolated network ($NET_NAME)..."
ensure_network
echo " Bridge: virbr-pxe (isolated, no host DHCP)"
# --- Step 2: Extract kernel/initrd for direct boot ---
echo "[2/3] Extracting kernel and initrd from ISO..."
KERNEL="/tmp/${SERVER_NAME}-vmlinuz"
INITRD="/tmp/${SERVER_NAME}-initrd"
7z e -o/tmp -y "$PROXMOX_ISO" casper/vmlinuz casper/initrd >/dev/null 2>&1
mv /tmp/vmlinuz "$KERNEL"
mv /tmp/initrd "$INITRD"
echo " Extracted vmlinuz and initrd"
# --- Step 3: Launch server VM ---
echo "[3/3] Launching PXE server ($SERVER_NAME)..."
virsh vol-create-as default "${SERVER_NAME}.qcow2" "${VM_DISK_SIZE}G" --format qcow2 >/dev/null
virt-install \
--name "$SERVER_NAME" \
--memory "$VM_RAM" \
--vcpus "$VM_CPUS" \
--disk "vol=default/${SERVER_NAME}.qcow2" \
--disk path="$PROXMOX_ISO",device=cdrom,readonly=on \
--network network="$NET_NAME" \
--os-variant ubuntu24.04 \
--graphics none \
--console pty,target_type=serial \
--install kernel="$KERNEL",initrd="$INITRD",kernel_args="console=ttyS0,115200n8 autoinstall ds=nocloud\;s=/cdrom/server/" \
--noautoconsole
echo ""
echo "============================================"
echo "PXE server VM launched!"
echo "============================================"
echo ""
echo "The autoinstall + first-boot will take ~15 minutes."
echo ""
echo "Step 1 — Monitor the server install:"
echo " virsh console $SERVER_NAME"
echo " (Ctrl+] to detach)"
echo ""
echo "Step 2 — Check when services are ready:"
echo " $0 --status"
echo ""
echo "Step 3 — Launch a PXE client to test booting:"
echo " $0 --client"
echo ""
echo "Cleanup:"
echo " $0 --destroy"
echo ""


@@ -1,187 +0,0 @@
#!/bin/bash
#
# test-vm.sh — Create a test VM to validate the PXE server setup
#
# This script:
# 1. Builds a CIDATA ISO with autoinstall config, packages, playbook, and webapp
# 2. Launches an Ubuntu 24.04 Server VM on the default libvirt network
# 3. The VM auto-installs, then runs the Ansible playbook on first boot
#
# Usage:
# ./test-vm.sh /path/to/ubuntu-24.04-live-server-amd64.iso
#
# After install completes (~10-15 min), access via:
# virsh console pxe-test (serial console, always works)
# ssh pxe@<dhcp-ip> (check: virsh domifaddr pxe-test)
#
# To clean up:
# ./test-vm.sh --destroy
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
VM_NAME="pxe-test"
VM_DISK="/var/lib/libvirt/images/${VM_NAME}.qcow2"
CIDATA_ISO="${SCRIPT_DIR}/.${VM_NAME}-cidata.iso"
VM_RAM=4096
VM_CPUS=2
VM_DISK_SIZE=40 # GB
# --- Handle --destroy flag ---
if [ "${1:-}" = "--destroy" ]; then
echo "Destroying test environment..."
virsh destroy "$VM_NAME" 2>/dev/null || true
virsh undefine "$VM_NAME" 2>/dev/null || true
virsh vol-delete "${VM_NAME}.qcow2" --pool default 2>/dev/null || true
rm -f "$CIDATA_ISO"
rm -f "/tmp/${VM_NAME}-vmlinuz" "/tmp/${VM_NAME}-initrd"
echo "Done."
exit 0
fi
# --- Validate Ubuntu ISO argument ---
UBUNTU_ISO="${1:-}"
if [ -z "$UBUNTU_ISO" ] || [ ! -f "$UBUNTU_ISO" ]; then
echo "Usage: $0 /path/to/ubuntu-24.04-live-server-amd64.iso"
echo ""
echo "Download from: https://ubuntu.com/download/server"
echo ""
echo "Other commands:"
echo " $0 --destroy Remove the test VM and network"
exit 1
fi
echo "============================================"
echo "PXE Server Test VM Setup"
echo "============================================"
echo ""
# --- Step 1: Build CIDATA ISO ---
echo "[1/4] Building CIDATA ISO..."
CIDATA_DIR=$(mktemp -d)
# Autoinstall config
cp "$SCRIPT_DIR/autoinstall/user-data" "$CIDATA_DIR/user-data"
touch "$CIDATA_DIR/meta-data"
# Offline .deb packages
if [ -d "$SCRIPT_DIR/offline-packages" ]; then
mkdir -p "$CIDATA_DIR/packages"
cp "$SCRIPT_DIR/offline-packages/"*.deb "$CIDATA_DIR/packages/" 2>/dev/null || true
echo " Copied $(ls -1 "$CIDATA_DIR/packages/"*.deb 2>/dev/null | wc -l) .deb packages"
else
echo " WARNING: No offline-packages/ directory. Run download-packages.sh first."
fi
# Ansible playbook
mkdir -p "$CIDATA_DIR/playbook"
cp "$SCRIPT_DIR/playbook/"* "$CIDATA_DIR/playbook/" 2>/dev/null || true
echo " Copied playbook/"
# Webapp
if [ -d "$SCRIPT_DIR/webapp" ]; then
mkdir -p "$CIDATA_DIR/webapp"
cp "$SCRIPT_DIR/webapp/app.py" "$SCRIPT_DIR/webapp/requirements.txt" "$CIDATA_DIR/webapp/"
cp -r "$SCRIPT_DIR/webapp/templates" "$SCRIPT_DIR/webapp/static" "$CIDATA_DIR/webapp/"
echo " Copied webapp/"
fi
# Pip wheels
if [ -d "$SCRIPT_DIR/pip-wheels" ]; then
cp -r "$SCRIPT_DIR/pip-wheels" "$CIDATA_DIR/pip-wheels"
echo " Copied pip-wheels/"
elif [ -d "$SCRIPT_DIR/offline-packages/pip-wheels" ]; then
cp -r "$SCRIPT_DIR/offline-packages/pip-wheels" "$CIDATA_DIR/pip-wheels"
echo " Copied pip-wheels/ (from offline-packages/)"
fi
# WinPE boot files (wimboot, boot.wim, BCD, ipxe.efi, etc.)
if [ -d "$SCRIPT_DIR/boot-files" ]; then
for bf in "$SCRIPT_DIR/boot-files"/*; do
[ -f "$bf" ] && cp "$bf" "$CIDATA_DIR/"
done
echo " Copied boot-files/ (wimboot, boot.wim, ipxe.efi, etc.)"
fi
# Boot tools
if [ -d "$SCRIPT_DIR/boot-tools" ]; then
cp -r "$SCRIPT_DIR/boot-tools" "$CIDATA_DIR/boot-tools"
echo " Copied boot-tools/"
fi
# Generate the CIDATA ISO
genisoimage -output "$CIDATA_ISO" -volid CIDATA -joliet -rock "$CIDATA_DIR" 2>/dev/null
CIDATA_SIZE=$(du -sh "$CIDATA_ISO" | cut -f1)
echo " CIDATA ISO: $CIDATA_ISO ($CIDATA_SIZE)"
rm -rf "$CIDATA_DIR"
# --- Step 2: Create VM disk ---
echo ""
echo "[2/4] Creating VM disk (${VM_DISK_SIZE}GB)..."
if virsh vol-info "$VM_NAME.qcow2" --pool default &>/dev/null; then
echo " Disk already exists. Destroy first with: $0 --destroy"
exit 1
fi
virsh vol-create-as default "${VM_NAME}.qcow2" "${VM_DISK_SIZE}G" --format qcow2
# --- Step 3: Extract kernel/initrd from ISO ---
echo ""
echo "[3/4] Extracting kernel and initrd from ISO..."
KERNEL="/tmp/${VM_NAME}-vmlinuz"
INITRD="/tmp/${VM_NAME}-initrd"
7z e -o/tmp -y "$UBUNTU_ISO" casper/vmlinuz casper/initrd 2>/dev/null
mv /tmp/vmlinuz "$KERNEL"
mv /tmp/initrd "$INITRD"
echo " Extracted vmlinuz and initrd from casper/"
# --- Step 4: Launch VM ---
echo ""
echo "[4/4] Launching VM ($VM_NAME)..."
# Use the default libvirt network (NAT, 192.168.122.0/24) for install access.
# If br-pxe bridge exists, add a second NIC for the isolated PXE switch.
# The Ansible playbook will configure 10.9.100.1/24 on the PXE interface.
PXE_BRIDGE_ARGS=""
if ip link show br-pxe &>/dev/null; then
PXE_BRIDGE_ARGS="--network bridge=br-pxe,model=virtio"
echo " Found br-pxe bridge, adding isolated switch NIC"
fi
virt-install \
--name "$VM_NAME" \
--memory "$VM_RAM" \
--vcpus "$VM_CPUS" \
--disk path="$VM_DISK",format=qcow2 \
--disk path="$UBUNTU_ISO",device=cdrom,readonly=on \
--disk path="$CIDATA_ISO",device=cdrom \
--network network=default \
$PXE_BRIDGE_ARGS \
--os-variant ubuntu24.04 \
--graphics none \
--console pty,target_type=serial \
--install kernel="$KERNEL",initrd="$INITRD",kernel_args="console=ttyS0,115200n8 autoinstall" \
--noautoconsole
echo ""
echo "============================================"
echo "VM launched! The autoinstall will take ~10-15 minutes."
echo "============================================"
echo ""
echo "Watch progress:"
echo " sudo virsh console $VM_NAME"
echo " (Press Ctrl+] to detach)"
echo ""
echo "After install + first boot:"
echo " Console: sudo virsh console $VM_NAME"
echo " Find IP: sudo virsh domifaddr $VM_NAME"
echo " SSH: ssh pxe@<ip-from-above>"
echo ""
echo "NOTE: The Ansible playbook will change the VM's IP to 10.9.100.1."
echo " After that, use 'virsh console' to access the VM."
echo " On the VM, verify with: curl http://localhost:9009"
echo ""
echo "Manage:"
echo " sudo virsh start $VM_NAME"
echo " sudo virsh shutdown $VM_NAME"
echo " $0 --destroy (remove everything)"
echo ""