Reorganize repo, enrollment share taxonomy, Blancco USB-build fixes, v4.10 PPKGs

Workstation reorganization:
- All build/deploy/helper scripts moved into scripts/ (paths updated to use
  REPO_ROOT instead of SCRIPT_DIR so they resolve sibling dirs from the new
  depth)
- New config/ directory placeholder for site-specific overrides
- Removed stale: mok-keys/, test-vm.sh, test-lab.sh, setup-guide-original.txt,
  unattend/ (duplicate of moved playbook/FlatUnattendW10.xml)
- README.md and SETUP.md structure listings updated, dead "Testing with KVM"
  section removed
- .claude/ gitignored
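The SCRIPT_DIR -> REPO_ROOT change follows the usual bash pattern (a sketch; variable names other than REPO_ROOT are illustrative):

```shell
#!/bin/bash
# Before the move, scripts resolved sibling dirs relative to themselves:
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
# From inside scripts/, siblings now live one level up, so anchor on the
# repo root instead and reference everything from there:
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
PLAYBOOK_DIR="$REPO_ROOT/playbook"   # resolves correctly at any depth
```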

Enrollment share internal taxonomy (forward-looking; existing servers
unaffected since they keep their current boot.wim with flat paths):
- Single SMB share kept (WinPE only mounts one Y: drive), but content now
  organised into ppkgs/, scripts/, config/, shopfloor-setup/, pre-install/{bios,
  installers}, installers-post/cmm/, blancco/, logs/
- README.md deployed to share root explaining each subdir
- New playbook tasks deploy site-config.json + wait-for-internet.ps1 +
  migrate-to-wifi.ps1 explicitly (were ad-hoc on legacy servers)
- preinstall/ renamed to pre-install/; BIOS subdir moved into pre-install/bios/
- startnet.cmd + startnet-template.cmd updated with new Y:\subdir\ paths
- Bumped GCCH PPKG references v4.9 -> v4.10
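The new layout reduces to a single mkdir tree. This sketch (run against a temp dir standing in for the share root) mirrors the subdirs listed above:

```shell
#!/bin/bash
SHARE_ROOT="$(mktemp -d)"   # stand-in for the real SMB share root
mkdir -p \
  "$SHARE_ROOT/ppkgs" \
  "$SHARE_ROOT/scripts" \
  "$SHARE_ROOT/config" \
  "$SHARE_ROOT/shopfloor-setup" \
  "$SHARE_ROOT/pre-install/bios" \
  "$SHARE_ROOT/pre-install/installers" \
  "$SHARE_ROOT/installers-post/cmm" \
  "$SHARE_ROOT/blancco" \
  "$SHARE_ROOT/logs"
# WinPE still mounts only the share root as Y:, so clients address
# content as Y:\ppkgs\..., Y:\pre-install\bios\..., etc.
```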

Blancco USB-build fixes (so the next fresh USB install boots Blancco end-to-end
without the manual fixup we applied against GOLD):
- grub-blancco.cfg: kernel/initrd switched HTTP -> TFTP (GRUB's HTTP module
  times out on multi-MB files); added modprobe.blacklist=iwlwifi,iwlmvm,btusb
  (WiFi drivers hang udev on Intel business PCs)
- grubx64.efi rebuilt from updated cfg
- Playbook task added to create /srv/tftp/blancco/ symlinks pointing at the
  HTTP-served binaries
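The symlink task boils down to something like the following sketch (the kernel/initrd filenames are assumed for illustration, and temp dirs stand in for the real /srv paths on the server):

```shell
#!/bin/bash
# Real layout: /srv/tftp/blancco/<file> -> the HTTP docroot copy, so
# tftpd serves the same binaries GRUB previously fetched over HTTP.
HTTP_ROOT="$(mktemp -d)"            # stand-in for the HTTP docroot
TFTP_ROOT="$(mktemp -d)/blancco"
mkdir -p "$TFTP_ROOT"
for f in vmlinuz initrd.img; do     # assumed kernel/initrd names
  touch "$HTTP_ROOT/$f"            # pretend HTTP-served binary
  ln -sfn "$HTTP_ROOT/$f" "$TFTP_ROOT/$f"
done
```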

run-enrollment.ps1: OOBEComplete is now set AFTER PPKG install (Win11 22H2+
hangs indefinitely if OOBEComplete is set before the bulk-enrollment PPKG runs).

Also includes deploy-bios.sh / pull-bios.sh / busybox-static / models.txt
that were sitting untracked at the repo root.
This commit is contained in: cproudlock
2026-04-14 16:01:02 -04:00
parent d14c240b48, commit d6776f7c7f
26 changed files with 380 additions and 824 deletions

scripts/Download-Drivers.ps1 (new file, +230 lines)
#
# Download-Drivers.ps1 — Download selected hardware drivers from GE CDN
#
# Reads user_selections.json and HardwareDriver.json from the MCL cache
# to download only the driver packs for your selected hardware models.
# Bypasses Media Creator Lite's unreliable download mechanism.
#
# Downloads go into the MCL cache structure so Upload-Image.ps1 can
# upload them with -IncludeDrivers.
#
# Usage:
# .\Download-Drivers.ps1 (download all selected models)
# .\Download-Drivers.ps1 -ListOnly (show what would be downloaded)
# .\Download-Drivers.ps1 -CachePath "D:\MCL\Cache" (custom cache location)
# .\Download-Drivers.ps1 -Force (re-download even if already cached)
#
# Requires internet access. Run on the workstation, not the PXE server.
#
param(
[string]$CachePath = "C:\ProgramData\GEAerospace\MediaCreator\Cache",
[switch]$ListOnly,
[switch]$Force
)
function Format-Size {
param([long]$Bytes)
if ($Bytes -ge 1GB) { return "{0:N1} GB" -f ($Bytes / 1GB) }
if ($Bytes -ge 1MB) { return "{0:N1} MB" -f ($Bytes / 1MB) }
return "{0:N0} KB" -f ($Bytes / 1KB)
}
function Resolve-DestDir {
param([string]$Dir)
return ($Dir -replace '^\*destinationdir\*\\?', '')
}
Write-Host ""
Write-Host "========================================" -ForegroundColor Cyan
Write-Host " PXE Driver Downloader" -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
Write-Host ""
# --- Validate paths ---
$DeployPath = Join-Path $CachePath "Deploy"
$ControlPath = Join-Path $DeployPath "Control"
$ToolsPath = Join-Path (Split-Path $CachePath -Parent) "Tools"
if (-not (Test-Path $ToolsPath -PathType Container)) {
$ToolsPath = "C:\ProgramData\GEAerospace\MediaCreator\Tools"
}
if (-not (Test-Path $ControlPath -PathType Container)) {
Write-Host "ERROR: Deploy\Control not found at $ControlPath" -ForegroundColor Red
Write-Host " Run Media Creator Lite first to cache the base content." -ForegroundColor Yellow
exit 1
}
# --- Parse user_selections.json ---
$SelectionsFile = Join-Path $ToolsPath "user_selections.json"
if (-not (Test-Path $SelectionsFile)) {
Write-Host "ERROR: user_selections.json not found at $SelectionsFile" -ForegroundColor Red
exit 1
}
$selections = (Get-Content $SelectionsFile -Raw | ConvertFrom-Json)[0]
$selectedOsId = $selections.OperatingSystemSelection
$selectedModelIds = @($selections.HardwareModelSelection | ForEach-Object { $_.Id } | Select-Object -Unique)
# --- Parse HardwareDriver.json ---
$driverJsonFile = Join-Path $ControlPath "HardwareDriver.json"
if (-not (Test-Path $driverJsonFile)) {
Write-Host "ERROR: HardwareDriver.json not found in $ControlPath" -ForegroundColor Red
exit 1
}
$driverJson = Get-Content $driverJsonFile -Raw | ConvertFrom-Json
# --- Match drivers to selections ---
$matchedDrivers = @($driverJson | Where-Object {
$selectedModelIds -contains $_.family -and $_.aOsIds -contains $selectedOsId
})
# Deduplicate by DestinationDir (some models share a driver pack)
$uniqueDrivers = [ordered]@{}
foreach ($drv in $matchedDrivers) {
$rel = Resolve-DestDir $drv.DestinationDir
if (-not $uniqueDrivers.Contains($rel)) {
$uniqueDrivers[$rel] = $drv
}
}
$totalSize = [long]0
$uniqueDrivers.Values | ForEach-Object { $totalSize += $_.size }
# --- Display plan ---
Write-Host " Cache: $CachePath"
Write-Host " OS ID: $selectedOsId"
Write-Host " Models: $($selectedModelIds.Count) selected"
Write-Host " Drivers: $($uniqueDrivers.Count) unique pack(s) ($(Format-Size $totalSize))" -ForegroundColor Cyan
Write-Host ""
if ($uniqueDrivers.Count -eq 0) {
Write-Host "No drivers match your selections." -ForegroundColor Yellow
exit 0
}
# Show each driver pack
$idx = 0
foreach ($rel in $uniqueDrivers.Keys) {
$idx++
$drv = $uniqueDrivers[$rel]
$localDir = Join-Path $CachePath $rel
$cached = Test-Path $localDir -PathType Container
$status = if ($cached -and -not $Force) { "[CACHED]" } else { "[DOWNLOAD]" }
$color = if ($cached -and -not $Force) { "Green" } else { "Yellow" }
Write-Host (" {0,3}. {1,-12} {2} ({3})" -f $idx, $status, $drv.modelsfriendlyname, (Format-Size $drv.size)) -ForegroundColor $color
Write-Host " $($drv.FileName)" -ForegroundColor Gray
}
Write-Host ""
if ($ListOnly) {
Write-Host " (list only — run without -ListOnly to download)" -ForegroundColor Gray
exit 0
}
# --- Download and extract ---
$downloadDir = Join-Path $env:TEMP "PXE-DriverDownloads"
if (-not (Test-Path $downloadDir)) { New-Item -ItemType Directory -Path $downloadDir -Force | Out-Null }
$completed = 0
$skipped = 0
$errors = 0
foreach ($rel in $uniqueDrivers.Keys) {
$drv = $uniqueDrivers[$rel]
$localDir = Join-Path $CachePath $rel
$zipFile = Join-Path $downloadDir $drv.FileName
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "[$($completed + $skipped + $errors + 1)/$($uniqueDrivers.Count)] $($drv.modelsfriendlyname)" -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
# Skip if already cached (unless -Force)
if ((Test-Path $localDir -PathType Container) -and -not $Force) {
Write-Host " Already cached at $rel" -ForegroundColor Green
$skipped++
Write-Host ""
continue
}
# Download
Write-Host " Downloading $(Format-Size $drv.size) ..." -ForegroundColor Gray
Write-Host " URL: $($drv.url)" -ForegroundColor DarkGray
try {
# Use curl.exe for progress display on large files
if (Get-Command curl.exe -ErrorAction SilentlyContinue) {
& curl.exe -L -o $zipFile $drv.url --progress-bar --fail
if ($LASTEXITCODE -ne 0) { throw "curl failed with exit code $LASTEXITCODE" }
} else {
# Fallback: WebClient (streams to disk, no buffering)
$wc = New-Object System.Net.WebClient
$wc.DownloadFile($drv.url, $zipFile)
}
} catch {
Write-Host " ERROR: Download failed - $_" -ForegroundColor Red
$errors++
Write-Host ""
continue
}
# Verify SHA256 hash
Write-Host " Verifying SHA256 hash ..." -ForegroundColor Gray
$actualHash = (Get-FileHash -Path $zipFile -Algorithm SHA256).Hash
if ($actualHash -ne $drv.hash) {
Write-Host " ERROR: Hash mismatch!" -ForegroundColor Red
Write-Host " Expected: $($drv.hash)" -ForegroundColor Red
Write-Host " Got: $actualHash" -ForegroundColor Red
Remove-Item -Path $zipFile -Force -ErrorAction SilentlyContinue
$errors++
Write-Host ""
continue
}
Write-Host " Hash OK." -ForegroundColor Green
# Extract to cache destination
Write-Host " Extracting to $rel ..." -ForegroundColor Gray
if (Test-Path $localDir) { Remove-Item -Recurse -Force $localDir -ErrorAction SilentlyContinue }
New-Item -ItemType Directory -Path $localDir -Force | Out-Null
try {
Expand-Archive -Path $zipFile -DestinationPath $localDir -Force
} catch {
Write-Host " ERROR: Extraction failed - $_" -ForegroundColor Red
$errors++
Write-Host ""
continue
}
# Clean up zip
Remove-Item -Path $zipFile -Force -ErrorAction SilentlyContinue
Write-Host " Done." -ForegroundColor Green
$completed++
Write-Host ""
}
# --- Summary ---
Write-Host "========================================" -ForegroundColor Cyan
Write-Host " Download Summary" -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
Write-Host " Downloaded: $completed" -ForegroundColor Green
if ($skipped -gt 0) { Write-Host " Skipped: $skipped (already cached)" -ForegroundColor Gray }
if ($errors -gt 0) { Write-Host " Failed: $errors" -ForegroundColor Red }
Write-Host ""
if ($completed -gt 0 -or $skipped -gt 0) {
Write-Host "Driver packs are in the MCL cache at:" -ForegroundColor Cyan
Write-Host " $DeployPath\Out-of-box Drivers\" -ForegroundColor White
Write-Host ""
Write-Host "To upload to the PXE server:" -ForegroundColor Cyan
Write-Host " .\Upload-Image.ps1 -IncludeDrivers" -ForegroundColor White
Write-Host ""
}
# Clean up temp dir if empty
if ((Get-ChildItem $downloadDir -Force -ErrorAction SilentlyContinue | Measure-Object).Count -eq 0) {
Remove-Item $downloadDir -Force -ErrorAction SilentlyContinue
}

scripts/Upload-Image.ps1 (new file, +379 lines)
#
# Upload-Image.ps1 — Copy MCL cached image to the PXE server
#
# Reads user_selections.json to upload only the selected OS, matching
# packages, and config files. Drivers are EXCLUDED by default.
#
# Usage:
# .\Upload-Image.ps1 (selected OS + packages, no drivers)
# .\Upload-Image.ps1 -IncludeDrivers (also upload selected hardware drivers)
# .\Upload-Image.ps1 -CachePath "D:\MCL\Cache" (custom cache location)
# .\Upload-Image.ps1 -Server 10.9.100.1 (custom server IP)
#
# After upload, use the PXE webapp (http://10.9.100.1:9009) to import
# the uploaded content into the desired image type.
#
param(
[string]$CachePath = "C:\ProgramData\GEAerospace\MediaCreator\Cache",
[string]$Server = "10.9.100.1",
[string]$User = "pxe-upload",
[string]$Pass = "pxe",
[switch]$IncludeDrivers
)
$Share = "\\$Server\image-upload"
function Format-Size {
param([long]$Bytes)
if ($Bytes -ge 1GB) { return "{0:N1} GB" -f ($Bytes / 1GB) }
if ($Bytes -ge 1MB) { return "{0:N1} MB" -f ($Bytes / 1MB) }
return "{0:N0} KB" -f ($Bytes / 1KB)
}
function Resolve-DestDir {
param([string]$Dir)
return ($Dir -replace '^\*destinationdir\*\\?', '')
}
Write-Host ""
Write-Host "========================================" -ForegroundColor Cyan
Write-Host " PXE Server Image Uploader" -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
Write-Host ""
# --- Validate source paths ---
$DeployPath = Join-Path $CachePath "Deploy"
$ToolsPath = Join-Path (Split-Path $CachePath -Parent) "Tools"
if (-not (Test-Path $ToolsPath -PathType Container)) {
$ToolsPath = "C:\ProgramData\GEAerospace\MediaCreator\Tools"
}
$SourcesZip = Join-Path $CachePath "Boot\Sources.zip"
if (-not (Test-Path $DeployPath -PathType Container)) {
Write-Host "ERROR: Deploy directory not found at $DeployPath" -ForegroundColor Red
Write-Host " .\Upload-Image.ps1 -CachePath ""D:\Path\To\Cache""" -ForegroundColor Yellow
exit 1
}
# --- Parse user_selections.json ---
$SelectionsFile = Join-Path $ToolsPath "user_selections.json"
if (-not (Test-Path $SelectionsFile)) {
Write-Host "ERROR: user_selections.json not found at $SelectionsFile" -ForegroundColor Red
Write-Host " Run Media Creator Lite first to create a configuration." -ForegroundColor Yellow
exit 1
}
$selections = (Get-Content $SelectionsFile -Raw | ConvertFrom-Json)[0]
$selectedOsId = $selections.OperatingSystemSelection
$selectedModelIds = @($selections.HardwareModelSelection | ForEach-Object { $_.Id } | Select-Object -Unique)
# --- Parse control JSONs ---
$ControlPath = Join-Path $DeployPath "Control"
$osJsonFile = Join-Path $ControlPath "OperatingSystem.json"
$driverJsonFile = Join-Path $ControlPath "HardwareDriver.json"
$pkgJsonFile = Join-Path $ControlPath "packages.json"
if (-not (Test-Path $osJsonFile)) {
Write-Host "ERROR: OperatingSystem.json not found in $ControlPath" -ForegroundColor Red
exit 1
}
$osJson = Get-Content $osJsonFile -Raw | ConvertFrom-Json
$driverJson = if (Test-Path $driverJsonFile) { Get-Content $driverJsonFile -Raw | ConvertFrom-Json } else { @() }
$pkgJson = if (Test-Path $pkgJsonFile) { Get-Content $pkgJsonFile -Raw | ConvertFrom-Json } else { @() }
# --- Resolve selections to paths ---
# OS: match OperatingSystemSelection ID to OperatingSystem.json entries
$matchedOs = @($osJson | Where-Object { $_.operatingSystemVersion.id -eq [int]$selectedOsId })
$osDirs = @()
$osTotalSize = [long]0
foreach ($os in $matchedOs) {
$rel = Resolve-DestDir $os.operatingSystemVersion.wim.DestinationDir
$osDirs += $rel
$osTotalSize += $os.operatingSystemVersion.wim.size
}
# Packages: enabled + matching OS ID
$matchedPkgs = @($pkgJson | Where-Object { $_.aOsIds -contains $selectedOsId -and $_.enabled -eq 1 })
$pkgTotalSize = [long]0
foreach ($pkg in $matchedPkgs) { $pkgTotalSize += $pkg.size }
# Drivers: match selected model IDs (family) + OS ID, deduplicate by path
$allMatchingDrivers = @($driverJson | Where-Object {
$selectedModelIds -contains $_.family -and $_.aOsIds -contains $selectedOsId
})
$allDriverDirSet = [ordered]@{}
foreach ($drv in $allMatchingDrivers) {
$rel = Resolve-DestDir $drv.DestinationDir
if (-not $allDriverDirSet.Contains($rel)) { $allDriverDirSet[$rel] = $drv.size }
}
$allDriverCount = $allDriverDirSet.Count
$allDriverTotalSize = [long]0
$allDriverDirSet.Values | ForEach-Object { $allDriverTotalSize += $_ }
$driverDirs = @()
$driverTotalSize = [long]0
if ($IncludeDrivers) {
$driverDirs = @($allDriverDirSet.Keys)
$driverTotalSize = $allDriverTotalSize
}
# --- Display upload plan ---
Write-Host " Cache: $CachePath"
Write-Host " Server: $Server"
Write-Host ""
Write-Host " Upload Plan (from user_selections.json):" -ForegroundColor Cyan
Write-Host " ------------------------------------------"
if ($matchedOs.Count -gt 0) {
$osName = $matchedOs[0].operatingSystemVersion.marketingName
Write-Host " OS: $osName ($(Format-Size $osTotalSize))" -ForegroundColor Green
} else {
Write-Host " OS: No match for selection ID $selectedOsId" -ForegroundColor Red
}
Write-Host " Packages: $($matchedPkgs.Count) update(s) ($(Format-Size $pkgTotalSize))" -ForegroundColor Green
if ($IncludeDrivers) {
Write-Host " Drivers: $($driverDirs.Count) model(s) ($(Format-Size $driverTotalSize))" -ForegroundColor Green
} else {
Write-Host " Drivers: SKIPPED -- $allDriverCount available, use -IncludeDrivers" -ForegroundColor Yellow
}
Write-Host " Control: Always included" -ForegroundColor Gray
Write-Host " Tools: $(if (Test-Path $ToolsPath) { 'Yes' } else { 'Not found' })" -ForegroundColor $(if (Test-Path $ToolsPath) { "Gray" } else { "Yellow" })
Write-Host " Sources: $(if (Test-Path $SourcesZip) { 'Yes (from Boot\Sources.zip)' } else { 'Not found' })" -ForegroundColor $(if (Test-Path $SourcesZip) { "Gray" } else { "Yellow" })
Write-Host ""
# --- Connect to SMB share ---
Write-Host "Connecting to $Share ..." -ForegroundColor Gray
net use $Share /delete 2>$null | Out-Null
$netResult = net use $Share /user:$User $Pass 2>&1
if ($LASTEXITCODE -ne 0) {
Write-Host "ERROR: Could not connect to $Share" -ForegroundColor Red
Write-Host $netResult -ForegroundColor Red
Write-Host ""
Write-Host "Make sure:" -ForegroundColor Yellow
Write-Host " - The PXE server is running at $Server" -ForegroundColor Yellow
Write-Host " - This PC is on the 10.9.100.x network" -ForegroundColor Yellow
Write-Host " - Samba is running on the PXE server" -ForegroundColor Yellow
exit 1
}
Write-Host "Connected." -ForegroundColor Green
Write-Host ""
$failed = $false
$stepNum = 0
$totalSteps = 1 # Deploy base always
if ($matchedOs.Count -gt 0) { $totalSteps++ }
if ($matchedPkgs.Count -gt 0) { $totalSteps++ }
if ($IncludeDrivers -and $driverDirs.Count -gt 0) { $totalSteps++ }
if (Test-Path $ToolsPath -PathType Container) { $totalSteps++ }
if (Test-Path $SourcesZip) { $totalSteps++ }
# --- Step: Deploy base (Control, Applications, config -- skip big dirs) ---
$stepNum++
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "[$stepNum/$totalSteps] Copying Deploy\ base (Control, Applications, config) ..." -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
robocopy $DeployPath "$Share\Deploy" /E /XD "Operating Systems" "Out-of-box Drivers" "Packages" /R:3 /W:5 /NP /ETA
if ($LASTEXITCODE -ge 8) {
Write-Host "ERROR: Deploy base copy failed (exit code $LASTEXITCODE)" -ForegroundColor Red
$failed = $true
}
# --- Step: Operating System ---
if ($matchedOs.Count -gt 0) {
$stepNum++
Write-Host ""
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "[$stepNum/$totalSteps] Copying Operating System ..." -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
foreach ($osDir in $osDirs) {
$src = Join-Path $CachePath $osDir
$dst = Join-Path $Share $osDir
if (Test-Path $src -PathType Container) {
Write-Host " $osDir" -ForegroundColor Gray
robocopy $src $dst /E /R:3 /W:5 /NP /ETA
if ($LASTEXITCODE -ge 8) {
Write-Host "ERROR: OS copy failed (exit code $LASTEXITCODE)" -ForegroundColor Red
$failed = $true
}
} else {
Write-Host " SKIPPED (not cached): $osDir" -ForegroundColor Yellow
}
}
}
# --- Step: Packages ---
if ($matchedPkgs.Count -gt 0) {
$stepNum++
Write-Host ""
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "[$stepNum/$totalSteps] Copying Packages ..." -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
# Group packages by destination directory for efficient robocopy
$pkgGroups = [ordered]@{}
foreach ($pkg in $matchedPkgs) {
$rel = Resolve-DestDir $pkg.destinationDir
if (-not $pkgGroups.Contains($rel)) { $pkgGroups[$rel] = @() }
$pkgGroups[$rel] += $pkg.fileName
}
foreach ($dir in $pkgGroups.Keys) {
$src = Join-Path $CachePath $dir
$dst = Join-Path $Share $dir
$files = $pkgGroups[$dir]
if (Test-Path $src -PathType Container) {
foreach ($f in $files) { Write-Host " $f" -ForegroundColor Gray }
$robocopyArgs = @($src, $dst) + $files + @("/R:3", "/W:5", "/NP", "/ETA")
& robocopy @robocopyArgs
if ($LASTEXITCODE -ge 8) {
Write-Host "ERROR: Package copy failed (exit code $LASTEXITCODE)" -ForegroundColor Red
$failed = $true
}
} else {
Write-Host " SKIPPED (not cached): $dir" -ForegroundColor Yellow
}
}
}
# --- Step: Drivers (only with -IncludeDrivers) ---
if ($IncludeDrivers -and $driverDirs.Count -gt 0) {
$stepNum++
Write-Host ""
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "[$stepNum/$totalSteps] Copying Drivers ($($driverDirs.Count) model(s)) ..." -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
$drvCopied = 0
foreach ($drvDir in $driverDirs) {
$drvCopied++
$src = Join-Path $CachePath $drvDir
$dst = Join-Path $Share $drvDir
if (Test-Path $src -PathType Container) {
Write-Host " [$drvCopied/$($driverDirs.Count)] $drvDir" -ForegroundColor Gray
robocopy $src $dst /E /R:3 /W:5 /NP /ETA
if ($LASTEXITCODE -ge 8) {
Write-Host "ERROR: Driver copy failed (exit code $LASTEXITCODE)" -ForegroundColor Red
$failed = $true
}
} else {
Write-Host " SKIPPED (not cached): $drvDir" -ForegroundColor Yellow
}
}
}
# --- Step: Tools ---
if (Test-Path $ToolsPath -PathType Container) {
$stepNum++
Write-Host ""
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "[$stepNum/$totalSteps] Copying Tools\ ..." -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
robocopy $ToolsPath "$Share\Tools" /E /R:3 /W:5 /NP /ETA
if ($LASTEXITCODE -ge 8) {
Write-Host "ERROR: Tools copy failed (exit code $LASTEXITCODE)" -ForegroundColor Red
$failed = $true
}
}
# --- Step: Sources ---
$TempSources = $null
if (Test-Path $SourcesZip) {
$stepNum++
Write-Host ""
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "[$stepNum/$totalSteps] Extracting and copying Sources\ ..." -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
$TempExtract = Join-Path $env:TEMP "SourcesExtract"
Remove-Item -Recurse -Force $TempExtract -ErrorAction SilentlyContinue
Expand-Archive $SourcesZip -DestinationPath $TempExtract -Force
# Handle nested Sources folder (zip may contain Sources/ at root)
$TempSources = $TempExtract
if ((Test-Path (Join-Path $TempExtract "Sources")) -and -not (Test-Path (Join-Path $TempExtract "Diskpart"))) {
$TempSources = Join-Path $TempExtract "Sources"
}
robocopy $TempSources "$Share\Sources" /E /R:3 /W:5 /NP /ETA
if ($LASTEXITCODE -ge 8) {
Write-Host "ERROR: Sources copy failed (exit code $LASTEXITCODE)" -ForegroundColor Red
$failed = $true
}
Remove-Item -Recurse -Force $TempExtract -ErrorAction SilentlyContinue
}
# --- Verify small files (SMB write-cache workaround) ---
Write-Host ""
Write-Host "========================================" -ForegroundColor Cyan
Write-Host "Verifying small files ..." -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
$fixCount = 0
$verifyPairs = @(
@{ Local = (Join-Path $DeployPath "Control"); Remote = "$Share\Deploy\Control" }
)
if (Test-Path $ToolsPath -PathType Container) {
$verifyPairs += @{ Local = $ToolsPath; Remote = "$Share\Tools" }
}
foreach ($pair in $verifyPairs) {
$localDir = $pair.Local
$remoteDir = $pair.Remote
if (-not (Test-Path $localDir -PathType Container)) { continue }
Get-ChildItem -Path $localDir -Recurse -File -ErrorAction SilentlyContinue |
Where-Object { $_.Length -gt 0 -and $_.Length -lt 1MB } |
ForEach-Object {
$rel = $_.FullName.Substring($localDir.Length)
$dstFile = Join-Path $remoteDir $rel
if (Test-Path $dstFile) {
$dstSize = (Get-Item $dstFile).Length
if ($dstSize -ne $_.Length) {
Write-Host " Fixing: $rel ($dstSize -> $($_.Length) bytes)" -ForegroundColor Yellow
$bytes = [System.IO.File]::ReadAllBytes($_.FullName)
[System.IO.File]::WriteAllBytes($dstFile, $bytes)
$fixCount++
}
}
}
}
if ($fixCount -eq 0) {
Write-Host " All files verified OK." -ForegroundColor Green
} else {
Write-Host " Fixed $fixCount file(s)." -ForegroundColor Yellow
}
# --- Disconnect ---
net use $Share /delete 2>$null | Out-Null
# --- Summary ---
Write-Host ""
if ($failed) {
Write-Host "========================================" -ForegroundColor Red
Write-Host " Upload completed with errors." -ForegroundColor Red
Write-Host "========================================" -ForegroundColor Red
} else {
Write-Host "========================================" -ForegroundColor Green
Write-Host " Upload complete!" -ForegroundColor Green
Write-Host "========================================" -ForegroundColor Green
}
Write-Host ""
Write-Host "Next steps:" -ForegroundColor Cyan
Write-Host " 1. Open the PXE webapp: http://$Server`:9009" -ForegroundColor White
Write-Host " 2. Go to Image Import" -ForegroundColor White
Write-Host " 3. Select source 'image-upload' and target image type" -ForegroundColor White
Write-Host " 4. Click Import" -ForegroundColor White
Write-Host ""

scripts/build-proxmox-iso.sh (new executable file, +357 lines)
#!/bin/bash
#
# build-proxmox-iso.sh — Build a self-contained PXE server installer ISO for Proxmox
#
# Repackages the Ubuntu 24.04 Server ISO with:
# - Autoinstall configuration (zero-touch install)
# - All offline .deb packages and Python wheels
# - Ansible playbook, Flask webapp, and boot tools
#
# The resulting ISO can be uploaded to Proxmox, attached to a VM, and booted.
# Ubuntu auto-installs, then first-boot configures all PXE services automatically.
#
# Usage:
# ./build-proxmox-iso.sh /path/to/ubuntu-24.04-live-server-amd64.iso [output.iso]
#
# Prerequisites (on build workstation):
# sudo apt install xorriso p7zip-full
#
# Before building, run:
# ./download-packages.sh (downloads offline .debs + pip wheels)
# ./prepare-boot-tools.sh ... (extracts Clonezilla, Blancco, Memtest)
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
AUTOINSTALL_DIR="$REPO_ROOT/autoinstall"
PLAYBOOK_DIR="$REPO_ROOT/playbook"
OFFLINE_PKG_DIR="$REPO_ROOT/offline-packages"
WEBAPP_DIR="$REPO_ROOT/webapp"
PIP_WHEELS_DIR="$REPO_ROOT/pip-wheels"
BOOT_TOOLS_DIR="$REPO_ROOT/boot-tools"
# --- Validate arguments ---
if [ $# -lt 1 ]; then
echo "Usage: $0 /path/to/ubuntu-24.04-live-server-amd64.iso [output.iso]"
echo ""
echo " Creates a self-contained ISO for deploying the PXE server in Proxmox."
echo " The ISO auto-installs Ubuntu and configures all PXE services."
echo ""
echo "Prerequisites:"
echo " sudo apt install xorriso p7zip-full"
exit 1
fi
UBUNTU_ISO="$(realpath "$1")"
OUTPUT_ISO="${2:-$REPO_ROOT/pxe-server-proxmox.iso}"
# --- Validate prerequisites ---
echo "============================================"
echo "PXE Server Proxmox ISO Builder"
echo "============================================"
echo ""
MISSING_CMDS=()
for cmd in xorriso 7z; do
if ! command -v "$cmd" &>/dev/null; then
MISSING_CMDS+=("$cmd")
fi
done
if [ ${#MISSING_CMDS[@]} -gt 0 ]; then
echo "ERROR: Missing required tools: ${MISSING_CMDS[*]}"
echo "Install with: sudo apt install xorriso p7zip-full"
exit 1
fi
if [ ! -f "$UBUNTU_ISO" ]; then
echo "ERROR: ISO not found at $UBUNTU_ISO"
exit 1
fi
# Quick sanity check: ensure it looks like an Ubuntu Server ISO
ISO_CONTENTS=$(7z l "$UBUNTU_ISO" 2>&1) || true
if ! echo "$ISO_CONTENTS" | grep -q "casper/vmlinuz"; then
echo "ERROR: Does not appear to be an Ubuntu Server ISO (missing casper/vmlinuz)"
exit 1
fi
if [ ! -f "$AUTOINSTALL_DIR/user-data" ]; then
echo "ERROR: user-data not found at $AUTOINSTALL_DIR/user-data"
exit 1
fi
if [ ! -f "$PLAYBOOK_DIR/pxe_server_setup.yml" ]; then
echo "ERROR: pxe_server_setup.yml not found at $PLAYBOOK_DIR/"
exit 1
fi
echo "Ubuntu ISO : $UBUNTU_ISO"
echo "Output ISO : $OUTPUT_ISO"
echo "Source Dir : $REPO_ROOT"
echo ""
# --- Setup work directory with cleanup trap ---
WORK_DIR=$(mktemp -d)
cleanup() { rm -rf "$WORK_DIR"; }
trap cleanup EXIT
EXTRACT_DIR="$WORK_DIR/iso"
BOOT_IMG_DIR="$WORK_DIR/BOOT"
# --- Step 1: Extract Ubuntu ISO ---
echo "[1/6] Extracting Ubuntu ISO..."
7z x -o"$EXTRACT_DIR" "$UBUNTU_ISO" -y >/dev/null 2>&1
# 7z extracts [BOOT] directory containing EFI images needed for rebuild
# Move it out so it doesn't end up in the final ISO filesystem
if [ -d "$EXTRACT_DIR/[BOOT]" ]; then
mv "$EXTRACT_DIR/[BOOT]" "$BOOT_IMG_DIR"
echo " Extracted boot images for BIOS + UEFI"
else
echo "ERROR: [BOOT] directory not found in extracted ISO"
echo " The Ubuntu ISO may be corrupted or an unsupported version."
exit 1
fi
# Ensure files are writable (ISO extraction may set read-only)
chmod -R u+w "$EXTRACT_DIR"
# --- Step 2: Generate autoinstall user-data ---
echo "[2/6] Generating autoinstall configuration..."
mkdir -p "$EXTRACT_DIR/server"
touch "$EXTRACT_DIR/server/meta-data"
# Reuse the common sections (identity, network, storage, SSH) from existing user-data
# and replace late-commands with ISO-specific versions
sed '/^ late-commands:/,$d' "$AUTOINSTALL_DIR/user-data" > "$EXTRACT_DIR/server/user-data"
# Append ISO-specific late-commands
cat >> "$EXTRACT_DIR/server/user-data" << 'LATE_COMMANDS'
late-commands:
# Copy project files from ISO (/cdrom/pxe-data/) to the installed system
- mkdir -p /target/opt/pxe-setup
- cp -r /cdrom/pxe-data/packages /target/opt/pxe-setup/ 2>/dev/null || true
- cp -r /cdrom/pxe-data/playbook /target/opt/pxe-setup/ 2>/dev/null || true
- cp -r /cdrom/pxe-data/webapp /target/opt/pxe-setup/ 2>/dev/null || true
- cp -r /cdrom/pxe-data/pip-wheels /target/opt/pxe-setup/ 2>/dev/null || true
- cp -r /cdrom/pxe-data/boot-tools /target/opt/pxe-setup/ 2>/dev/null || true
# Copy boot files (wimboot, boot.wim, BCD, ipxe.efi, etc.) from pxe-data root
- sh -c 'for f in /cdrom/pxe-data/*; do [ -f "$f" ] && cp "$f" /target/opt/pxe-setup/; done' || true
# Install deb packages in target chroot
- |
curtin in-target --target=/target -- bash -c '
if compgen -G "/opt/pxe-setup/packages/*.deb" > /dev/null; then
dpkg -i /opt/pxe-setup/packages/*.deb 2>/dev/null || true
# second pass retries any packages that failed on dependency ordering
dpkg -i /opt/pxe-setup/packages/*.deb 2>/dev/null || true
if command -v nmcli >/dev/null; then
systemctl enable NetworkManager
fi
fi
'
# Create first-boot script (reads from local /opt/pxe-setup/)
- |
curtin in-target --target=/target -- bash -c '
cat <<"EOF" > /opt/first-boot.sh
#!/bin/bash
SRC=/opt/pxe-setup
# Install all offline .deb packages
if compgen -G "$SRC/packages/*.deb" > /dev/null; then
dpkg -i $SRC/packages/*.deb 2>/dev/null || true
dpkg -i $SRC/packages/*.deb 2>/dev/null || true
fi
# Run the Ansible playbook (override USB paths to local source)
if [ -f $SRC/playbook/pxe_server_setup.yml ]; then
cd $SRC/playbook
ansible-playbook -i localhost, -c local pxe_server_setup.yml \
-e usb_root=$SRC -e usb_mount=$SRC/playbook
fi
# Disable rc.local to prevent rerunning
sed -i "s|^/opt/first-boot.sh.*|# &|" /etc/rc.local
# Grow the root LV and filesystem to fill the disk (no-op if already full)
lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv || true
# Clean up large setup files to save disk space
rm -rf $SRC/packages $SRC/pip-wheels $SRC/boot-tools
rm -f $SRC/boot.wim $SRC/boot.sdi $SRC/bootx64.efi $SRC/wimboot $SRC/ipxe.efi $SRC/BCD $SRC/boot.stl
EOF
'
- curtin in-target --target=/target -- chmod +x /opt/first-boot.sh
# Create rc.local to run first-boot on next startup
- |
curtin in-target --target=/target -- bash -c '
cat <<"EOF" > /etc/rc.local
#!/bin/bash
/opt/first-boot.sh > /var/log/first-boot.log 2>&1 &
exit 0
EOF
'
- curtin in-target --target=/target -- chmod +x /etc/rc.local
user-data:
disable_root: false
refresh-installer:
update: no
LATE_COMMANDS
echo " Generated server/user-data and server/meta-data"
# --- Step 3: Copy project files to pxe-data/ ---
echo "[3/6] Copying project files to ISO..."
PXE_DATA="$EXTRACT_DIR/pxe-data"
mkdir -p "$PXE_DATA"
# Offline .deb packages
if [ -d "$OFFLINE_PKG_DIR" ]; then
mkdir -p "$PXE_DATA/packages"
DEB_COUNT=0
for deb in "$OFFLINE_PKG_DIR"/*.deb; do
if [ -f "$deb" ]; then
cp "$deb" "$PXE_DATA/packages/"
DEB_COUNT=$((DEB_COUNT + 1))
fi
done
echo " Copied $DEB_COUNT .deb packages"
else
echo " WARNING: No offline-packages/ directory. Run download-packages.sh first."
fi
# Ansible playbook
mkdir -p "$PXE_DATA/playbook"
cp "$PLAYBOOK_DIR/"* "$PXE_DATA/playbook/" 2>/dev/null || true
echo " Copied playbook/"
# Flask webapp
if [ -d "$WEBAPP_DIR" ]; then
mkdir -p "$PXE_DATA/webapp"
cp "$WEBAPP_DIR/app.py" "$WEBAPP_DIR/requirements.txt" "$PXE_DATA/webapp/"
cp -r "$WEBAPP_DIR/templates" "$WEBAPP_DIR/static" "$PXE_DATA/webapp/"
echo " Copied webapp/"
fi
# Python wheels
if [ ! -d "$PIP_WHEELS_DIR" ]; then
echo " pip-wheels/ not found — downloading now..."
mkdir -p "$PIP_WHEELS_DIR"
if pip3 download -d "$PIP_WHEELS_DIR" flask lxml 2>/dev/null; then
echo " Downloaded pip wheels successfully."
else
echo " WARNING: Failed to download pip wheels (no internet?)"
rmdir "$PIP_WHEELS_DIR" 2>/dev/null || true
fi
fi
if [ -d "$PIP_WHEELS_DIR" ]; then
cp -r "$PIP_WHEELS_DIR" "$PXE_DATA/pip-wheels"
WHEEL_COUNT=$(find "$PIP_WHEELS_DIR" -name '*.whl' | wc -l)
echo " Copied pip-wheels/ ($WHEEL_COUNT wheels)"
fi
# WinPE boot files (wimboot, boot.wim, BCD, ipxe.efi, etc.)
BOOT_FILES_DIR="$REPO_ROOT/boot-files"
if [ -d "$BOOT_FILES_DIR" ]; then
BOOT_FILE_COUNT=0
for bf in "$BOOT_FILES_DIR"/*; do
if [ -f "$bf" ]; then
cp "$bf" "$PXE_DATA/"
BOOT_FILE_COUNT=$((BOOT_FILE_COUNT + 1))
fi
done
BOOT_FILES_SIZE=$(du -sh "$BOOT_FILES_DIR" | cut -f1)
echo " Copied $BOOT_FILE_COUNT boot files ($BOOT_FILES_SIZE) — wimboot, boot.wim, ipxe.efi, etc."
else
echo " WARNING: No boot-files/ found (copy WinPE boot files from Media Creator)"
fi
# Boot tools (Clonezilla, Blancco, Memtest)
if [ -d "$BOOT_TOOLS_DIR" ]; then
cp -r "$BOOT_TOOLS_DIR" "$PXE_DATA/boot-tools"
TOOLS_SIZE=$(du -sh "$PXE_DATA/boot-tools" | cut -f1)
echo " Copied boot-tools/ ($TOOLS_SIZE)"
else
echo " No boot-tools/ found (run prepare-boot-tools.sh first)"
fi
# --- Step 4: Modify GRUB for autoinstall ---
echo "[4/6] Configuring autoinstall boot..."
GRUB_CFG="$EXTRACT_DIR/boot/grub/grub.cfg"
if [ ! -f "$GRUB_CFG" ]; then
echo "ERROR: boot/grub/grub.cfg not found in extracted ISO"
exit 1
fi
# Add autoinstall kernel parameter with nocloud datasource pointing to /cdrom/server/
# The semicolon must be escaped as \; in GRUB (it's a command separator)
# Apply to both regular and HWE kernels
sed -i 's|/casper/vmlinuz\b|/casper/vmlinuz autoinstall ds=nocloud\\;s=/cdrom/server/|g' "$GRUB_CFG"
sed -i 's|/casper/hwe-vmlinuz\b|/casper/hwe-vmlinuz autoinstall ds=nocloud\\;s=/cdrom/server/|g' "$GRUB_CFG"
# Reduce timeout for automatic boot (1 second instead of default 30)
sed -i 's/set timeout=.*/set timeout=1/' "$GRUB_CFG"
echo " Modified GRUB: autoinstall enabled, timeout=1s"
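# Illustrative: after patching, a menuentry kernel line reads roughly
#   linux /casper/vmlinuz autoinstall ds=nocloud\;s=/cdrom/server/ ---
# (the \; reaches the kernel cmdline as ';', separating the nocloud
# datasource from its seed path)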
# --- Step 5: Rebuild ISO ---
echo "[5/6] Rebuilding ISO (this may take a few minutes)..."
# Verify required boot images exist
EFI_IMG="$BOOT_IMG_DIR/2-Boot-NoEmul.img"
if [ ! -f "$EFI_IMG" ]; then
echo "ERROR: EFI boot image not found at $EFI_IMG"
exit 1
fi
if [ ! -f "$EXTRACT_DIR/boot/grub/i386-pc/eltorito.img" ]; then
echo "ERROR: BIOS boot image not found at boot/grub/i386-pc/eltorito.img"
exit 1
fi
xorriso -as mkisofs -r \
-V 'PXE-SERVER' \
-o "$OUTPUT_ISO" \
--grub2-mbr --interval:local_fs:0s-15s:zero_mbrpt,zero_gpt:"$UBUNTU_ISO" \
--protective-msdos-label \
-partition_cyl_align off \
-partition_offset 16 \
--mbr-force-bootable \
-append_partition 2 28732ac11ff8d211ba4b00a0c93ec93b "$EFI_IMG" \
-appended_part_as_gpt \
-iso_mbr_part_type a2a0d0ebe5b9334487c068b6b72699c7 \
-c '/boot.catalog' \
-b '/boot/grub/i386-pc/eltorito.img' \
-no-emul-boot -boot-load-size 4 -boot-info-table --grub2-boot-info \
-eltorito-alt-boot \
-e '--interval:appended_partition_2:::' \
-no-emul-boot \
"$EXTRACT_DIR"
# --- Step 6: Done ---
echo "[6/6] Cleaning up..."
ISO_SIZE=$(du -sh "$OUTPUT_ISO" | cut -f1)
echo ""
echo "============================================"
echo "Proxmox ISO build complete!"
echo "============================================"
echo ""
echo "Output: $OUTPUT_ISO ($ISO_SIZE)"
echo ""
echo "Proxmox deployment:"
echo " 1. Upload ISO to Proxmox storage (Datacenter -> Storage -> ISO Images)"
echo " 2. Create a new VM:"
echo " - BIOS: OVMF (UEFI) — or SeaBIOS (both work)"
echo " - Memory: 4096 MB"
echo " - CPU: 2+ cores"
echo " - Disk: 40+ GB (VirtIO SCSI)"
echo " - Network: Bridge connected to isolated PXE network"
echo " 3. Attach ISO as CD-ROM and start the VM"
echo " 4. Ubuntu auto-installs (~10-15 minutes, zero interaction)"
echo " 5. After reboot, first-boot configures all PXE services"
echo " 6. Access webapp at http://10.9.100.1:9009"
echo ""
echo "NOTE: The VM's network bridge must be connected to your isolated PXE"
echo " network. The server will use static IP 10.9.100.1/24."
echo ""

scripts/build-usb.sh Executable file

@@ -0,0 +1,373 @@
#!/bin/bash
#
# build-usb.sh — Build a bootable PXE-server installer USB
#
# Creates a two-partition USB:
# Partition 1: Ubuntu Server 24.04 installer (ISO contents)
# Partition 2: CIDATA volume (autoinstall config, .debs, playbook)
#
# The target machine boots from this USB, Ubuntu auto-installs with
# cloud-init (user-data/meta-data from CIDATA), installs offline .debs,
# and on first boot runs the Ansible playbook to configure PXE services.
#
# Usage:
# sudo ./build-usb.sh /dev/sdX /path/to/ubuntu-24.04-live-server-amd64.iso
#
# WARNING: This will ERASE the target USB device.
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
AUTOINSTALL_DIR="$REPO_ROOT/autoinstall"
PLAYBOOK_DIR="$REPO_ROOT/playbook"
OFFLINE_PKG_DIR="$REPO_ROOT/offline-packages"
# --- Validate arguments ---
if [ $# -lt 2 ]; then
echo "Usage: sudo $0 /dev/sdX /path/to/ubuntu-24.04.iso [/path/to/winpe-images]"
echo ""
echo " The optional third argument is the path to WinPE deployment content"
echo " (e.g., the mounted Media Creator LITE USB). If provided, the images"
echo " will be bundled onto the CIDATA partition for automatic import."
echo ""
echo "Available removable devices:"
lsblk -d -o NAME,SIZE,TRAN,RM | grep -E '^\S+\s+\S+\s+(usb)\s+1'
exit 1
fi
USB_DEV="$1"
ISO_PATH="$2"
WINPE_SOURCE="${3:-}"
# Safety checks
if [ "$(id -u)" -ne 0 ]; then
echo "ERROR: Must run as root (sudo)."
exit 1
fi
if [ ! -b "$USB_DEV" ]; then
echo "ERROR: $USB_DEV is not a block device."
exit 1
fi
if [ ! -f "$ISO_PATH" ]; then
echo "ERROR: ISO not found at $ISO_PATH"
exit 1
fi
# Verify it's a removable device (safety against wiping system disks)
REMOVABLE=$(lsblk -nd -o RM "$USB_DEV" 2>/dev/null || echo "0")
if [ "$REMOVABLE" != "1" ]; then
echo "WARNING: $USB_DEV does not appear to be a removable device."
read -rp "Are you SURE you want to erase $USB_DEV? (type YES): " CONFIRM
if [ "$CONFIRM" != "YES" ]; then
echo "Aborted."
exit 1
fi
fi
# Verify required source files exist
if [ ! -f "$AUTOINSTALL_DIR/user-data" ]; then
echo "ERROR: user-data not found at $AUTOINSTALL_DIR/user-data"
exit 1
fi
if [ ! -f "$AUTOINSTALL_DIR/meta-data" ]; then
echo "ERROR: meta-data not found at $AUTOINSTALL_DIR/meta-data"
exit 1
fi
if [ ! -f "$PLAYBOOK_DIR/pxe_server_setup.yml" ]; then
echo "ERROR: pxe_server_setup.yml not found at $PLAYBOOK_DIR/"
exit 1
fi
echo "============================================"
echo "PXE Server USB Builder"
echo "============================================"
echo "USB Device : $USB_DEV"
echo "ISO : $ISO_PATH"
echo "Source Dir : $REPO_ROOT"
echo ""
echo "This will ERASE all data on $USB_DEV."
read -rp "Continue? (y/N): " PROCEED
if [[ ! "$PROCEED" =~ ^[Yy]$ ]]; then
echo "Aborted."
exit 1
fi
# --- Unmount any existing partitions ---
echo ""
echo "[1/6] Unmounting existing partitions on $USB_DEV..."
for part in "${USB_DEV}"*; do
umount "$part" 2>/dev/null || true
done
# --- Rebuild ISO with 'autoinstall' kernel parameter ---
echo "[2/6] Rebuilding ISO with autoinstall kernel parameter..."
ISO_WORK=$(mktemp -d)
7z -y x "$ISO_PATH" -o"$ISO_WORK/iso" >/dev/null 2>&1
mv "$ISO_WORK/iso/[BOOT]" "$ISO_WORK/BOOT"
chmod -R u+w "$ISO_WORK/iso"
# Patch grub.cfg: add 'autoinstall' to kernel cmdline, reduce timeout
sed -i 's|linux\t/casper/vmlinuz ---|linux\t/casper/vmlinuz autoinstall ---|' "$ISO_WORK/iso/boot/grub/grub.cfg"
sed -i 's/^set timeout=30/set timeout=5/' "$ISO_WORK/iso/boot/grub/grub.cfg"
PATCHED_ISO="$ISO_WORK/patched.iso"
cd "$ISO_WORK/iso"
xorriso -as mkisofs -r \
-V 'Ubuntu-Server 24.04.3 LTS amd64' \
-o "$PATCHED_ISO" \
--grub2-mbr ../BOOT/1-Boot-NoEmul.img \
--protective-msdos-label \
-partition_cyl_align off \
-partition_offset 16 \
--mbr-force-bootable \
-append_partition 2 28732ac11ff8d211ba4b00a0c93ec93b ../BOOT/2-Boot-NoEmul.img \
-appended_part_as_gpt \
-iso_mbr_part_type a2a0d0ebe5b9334487c068b6b72699c7 \
-c '/boot.catalog' \
-b '/boot/grub/i386-pc/eltorito.img' \
-no-emul-boot \
-boot-load-size 4 \
-boot-info-table \
--grub2-boot-info \
-eltorito-alt-boot \
-e '--interval:appended_partition_2:::' \
-no-emul-boot \
-boot-load-size 10160 \
. 2>/dev/null
cd "$REPO_ROOT"
echo " ISO rebuilt with 'autoinstall' kernel param and 5s GRUB timeout"
echo " Writing patched ISO to $USB_DEV..."
dd if="$PATCHED_ISO" of="$USB_DEV" bs=4M status=progress oflag=sync
sync
ISO_SIZE=$(stat -c%s "$PATCHED_ISO")
rm -rf "$ISO_WORK"
# --- Find the end of the ISO to create CIDATA partition ---
echo "[3/6] Creating CIDATA partition after ISO data..."
# Get ISO size in bytes and calculate the start sector for the new partition
SECTOR_SIZE=512
# Start the CIDATA partition 1MB after the ISO ends (alignment)
START_SECTOR=$(( (ISO_SIZE / SECTOR_SIZE) + 2048 ))
# Use sfdisk to append a new partition
echo " ISO size: $((ISO_SIZE / 1024 / 1024)) MB"
echo " CIDATA partition starts at sector $START_SECTOR"
# Append a new partition with sfdisk: '<start>,+,L' = start sector,
# '+' = use all remaining space, type L (Linux)
echo "${START_SECTOR},+,L" | sfdisk --append "$USB_DEV" --no-reread 2>/dev/null || true
partprobe "$USB_DEV"
sleep 2
# Determine the new partition name (could be sdX3, sdX4, etc.)
CIDATA_PART=""
for part in "${USB_DEV}"[0-9]*; do
# Find the partition that starts at or after our start sector
PART_START=$(sfdisk -d "$USB_DEV" 2>/dev/null | grep "^$part " | grep -o 'start=[[:space:]]*[0-9]*' | grep -o '[0-9]*')
if [ -n "$PART_START" ] && [ "$PART_START" -ge "$START_SECTOR" ]; then
CIDATA_PART="$part"
break
fi
done
# Fallback: use the last partition
if [ -z "$CIDATA_PART" ]; then
CIDATA_PART=$(lsblk -ln -o NAME "$USB_DEV" | tail -1)
CIDATA_PART="/dev/$CIDATA_PART"
fi
echo " CIDATA partition: $CIDATA_PART"
# --- Format CIDATA partition ---
echo "[4/6] Formatting $CIDATA_PART as FAT32 (label: CIDATA)..."
mkfs.vfat -F 32 -n CIDATA "$CIDATA_PART"
# --- Mount and copy files ---
echo "[5/6] Copying autoinstall config, packages, and playbook to CIDATA..."
MOUNT_POINT=$(mktemp -d)
mount "$CIDATA_PART" "$MOUNT_POINT"
# Copy cloud-init files
cp "$AUTOINSTALL_DIR/user-data" "$MOUNT_POINT/"
cp "$AUTOINSTALL_DIR/meta-data" "$MOUNT_POINT/"
# Copy offline .deb packages into packages/ subdirectory
mkdir -p "$MOUNT_POINT/packages"
DEB_COUNT=0
if [ -d "$OFFLINE_PKG_DIR" ]; then
for deb in "$OFFLINE_PKG_DIR"/*.deb; do
if [ -f "$deb" ]; then
cp "$deb" "$MOUNT_POINT/packages/"
DEB_COUNT=$((DEB_COUNT + 1))
fi
done
fi
echo " Copied $DEB_COUNT .deb packages to packages/"
# Copy playbook directory
cp -r "$PLAYBOOK_DIR" "$MOUNT_POINT/playbook"
echo " Copied playbook/"
# Copy webapp
WEBAPP_DIR="$REPO_ROOT/webapp"
if [ -d "$WEBAPP_DIR" ]; then
mkdir -p "$MOUNT_POINT/webapp"
cp -r "$WEBAPP_DIR/app.py" "$WEBAPP_DIR/requirements.txt" "$MOUNT_POINT/webapp/"
cp -r "$WEBAPP_DIR/templates" "$WEBAPP_DIR/static" "$MOUNT_POINT/webapp/"
echo " Copied webapp/"
fi
# Copy pip wheels for offline Flask install
PIP_WHEELS_DIR="$REPO_ROOT/pip-wheels"
if [ ! -d "$PIP_WHEELS_DIR" ]; then
echo " pip-wheels/ not found — downloading now..."
mkdir -p "$PIP_WHEELS_DIR"
if pip3 download -d "$PIP_WHEELS_DIR" flask lxml 2>/dev/null; then
echo " Downloaded pip wheels successfully."
else
echo " WARNING: Failed to download pip wheels (no internet?)"
echo " The PXE server will need internet to install Flask later,"
echo " or manually copy wheels to pip-wheels/ and rebuild."
rmdir "$PIP_WHEELS_DIR" 2>/dev/null || true
fi
fi
if [ -d "$PIP_WHEELS_DIR" ]; then
cp -r "$PIP_WHEELS_DIR" "$MOUNT_POINT/pip-wheels"
WHEEL_COUNT=$(find "$PIP_WHEELS_DIR" -name '*.whl' | wc -l)
echo " Copied pip-wheels/ ($WHEEL_COUNT wheels)"
fi
# Copy WinPE boot files (wimboot, boot.wim, BCD, ipxe.efi, etc.)
BOOT_FILES_DIR="$REPO_ROOT/boot-files"
if [ -d "$BOOT_FILES_DIR" ]; then
BOOT_FILE_COUNT=0
for bf in "$BOOT_FILES_DIR"/*; do
if [ -f "$bf" ]; then
cp "$bf" "$MOUNT_POINT/"
BOOT_FILE_COUNT=$((BOOT_FILE_COUNT + 1))
fi
done
BOOT_FILES_SIZE=$(du -sh "$BOOT_FILES_DIR" | cut -f1)
echo " Copied $BOOT_FILE_COUNT boot files ($BOOT_FILES_SIZE) — wimboot, boot.wim, ipxe.efi, etc."
else
echo " WARNING: No boot-files/ found (copy WinPE boot files from Media Creator)"
fi
# Copy boot tools (Clonezilla, Blancco, Memtest) if prepared
BOOT_TOOLS_DIR="$REPO_ROOT/boot-tools"
if [ -d "$BOOT_TOOLS_DIR" ]; then
cp -r "$BOOT_TOOLS_DIR" "$MOUNT_POINT/boot-tools"
TOOLS_SIZE=$(du -sh "$MOUNT_POINT/boot-tools" | cut -f1)
echo " Copied boot-tools/ ($TOOLS_SIZE)"
else
echo " No boot-tools/ found (run prepare-boot-tools.sh first)"
fi
# Copy enrollment directory (PPKGs, run-enrollment.ps1) if present
# FAT32 has a 4GB max file size; files larger than that are split into chunks
# that the playbook reassembles with `cat`.
ENROLLMENT_DIR="$REPO_ROOT/enrollment"
FAT32_MAX=$((3500 * 1024 * 1024)) # 3500 MiB chunks, safely under 4GiB FAT32 limit
if [ -d "$ENROLLMENT_DIR" ]; then
mkdir -p "$MOUNT_POINT/enrollment"
SPLIT_COUNT=0
for f in "$ENROLLMENT_DIR"/*; do
[ -e "$f" ] || continue
bn="$(basename "$f")"
if [ -f "$f" ] && [ "$(stat -c%s "$f")" -gt "$FAT32_MAX" ]; then
echo " Splitting $bn (>$((FAT32_MAX / 1024 / 1024))M) into chunks..."
split -b "$FAT32_MAX" -d -a 2 "$f" "$MOUNT_POINT/enrollment/${bn}.part."
SPLIT_COUNT=$((SPLIT_COUNT + 1))
else
cp -r "$f" "$MOUNT_POINT/enrollment/"
fi
done
PPKG_COUNT=$(find "$ENROLLMENT_DIR" -maxdepth 1 -name '*.ppkg' 2>/dev/null | wc -l)
ENROLL_SIZE=$(du -sh "$MOUNT_POINT/enrollment" | cut -f1)
echo " Copied enrollment/ ($ENROLL_SIZE, $PPKG_COUNT PPKGs, $SPLIT_COUNT split)"
else
echo " No enrollment/ directory found (PPKGs can be uploaded via webapp later)"
fi
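# For reference, server-side reassembly is equivalent to (illustrative sketch
# with a made-up filename; the playbook task is authoritative):
#   cat foo.ppkg.part.?? > foo.ppkg && rm foo.ppkg.part.??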
# Copy BIOS update binaries if staged
BIOS_DIR="$REPO_ROOT/bios-staging"
if [ -d "$BIOS_DIR" ] && [ "$(ls -A "$BIOS_DIR" 2>/dev/null)" ]; then
echo " Copying BIOS update binaries from bios-staging/..."
mkdir -p "$MOUNT_POINT/bios"
cp -r "$BIOS_DIR"/* "$MOUNT_POINT/bios/" 2>/dev/null || true
BIOS_COUNT=$(find "$MOUNT_POINT/bios" -name '*.exe' 2>/dev/null | wc -l)
BIOS_SIZE=$(du -sh "$MOUNT_POINT/bios" | cut -f1)
echo " Copied bios/ ($BIOS_SIZE, $BIOS_COUNT files)"
else
echo " No bios-staging/ found (BIOS updates can be pushed via download-drivers.py later)"
fi
# Copy Dell driver packs if staged
# Files larger than the FAT32 4GB limit are split into chunks; the playbook
# reassembles them on the server.
DRIVERS_DIR="$REPO_ROOT/drivers-staging"
if [ -d "$DRIVERS_DIR" ] && [ "$(ls -A "$DRIVERS_DIR" 2>/dev/null)" ]; then
echo " Copying Dell driver packs from drivers-staging/..."
mkdir -p "$MOUNT_POINT/drivers"
DRV_SPLIT=0
# Mirror directory tree first (fast)
(cd "$DRIVERS_DIR" && find . -type d -exec mkdir -p "$MOUNT_POINT/drivers/{}" \;)
# Copy files <4GB directly, split files >=4GB into chunks
while IFS= read -r f; do
rel="${f#"$DRIVERS_DIR"/}"
dest="$MOUNT_POINT/drivers/$rel"
if [ "$(stat -c%s "$f")" -gt "$FAT32_MAX" ]; then
echo " Splitting $rel..."
split -b "$FAT32_MAX" -d -a 2 "$f" "${dest}.part."
DRV_SPLIT=$((DRV_SPLIT + 1))
else
cp "$f" "$dest"
fi
done < <(find "$DRIVERS_DIR" -type f)
DRIVERS_SIZE=$(du -sh "$MOUNT_POINT/drivers" | cut -f1)
echo " Copied drivers/ ($DRIVERS_SIZE, $DRV_SPLIT split)"
else
echo " No drivers-staging/ found (drivers can be downloaded later)"
fi
# Optionally copy WinPE deployment images
if [ -n "$WINPE_SOURCE" ] && [ -d "$WINPE_SOURCE" ]; then
echo " Copying WinPE deployment content from $WINPE_SOURCE..."
mkdir -p "$MOUNT_POINT/images"
cp -r "$WINPE_SOURCE"/* "$MOUNT_POINT/images/" 2>/dev/null || true
IMG_SIZE=$(du -sh "$MOUNT_POINT/images" | cut -f1)
echo " Copied WinPE images ($IMG_SIZE)"
elif [ -n "$WINPE_SOURCE" ]; then
echo " WARNING: WinPE source path not found: $WINPE_SOURCE (skipping)"
fi
# List what's on CIDATA
echo ""
echo " CIDATA contents:"
ls -lh "$MOUNT_POINT/" | sed 's/^/ /'
# --- Cleanup ---
echo ""
echo "[6/6] Syncing and unmounting..."
sync
umount "$MOUNT_POINT"
rmdir "$MOUNT_POINT"
echo ""
echo "============================================"
echo "USB build complete!"
echo "============================================"
echo ""
echo "Next steps:"
echo " 1. Insert USB into target machine"
echo " 2. Boot from USB (F12 / boot menu)"
echo " 3. Ubuntu will auto-install and configure the PXE server"
echo " 4. After reboot, move the NIC to the isolated PXE network"
echo ""

scripts/deploy-bios.sh Executable file

@@ -0,0 +1,50 @@
#!/bin/bash
# deploy-bios.sh - Deploy BIOS update files to a running PXE server
# Copies Flash64W.exe, BIOS binaries, models.txt, and check-bios.cmd
#
# Usage: ./deploy-bios.sh [server-ip]
# Default server: 10.9.100.1
set -e
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
PXE_SERVER="${1:-10.9.100.1}"
PXE_USER="pxe"
PXE_PASS="pxe"
REMOTE_DIR="/srv/samba/enrollment/BIOS"
BIOS_DIR="$REPO_ROOT/bios-staging"
MANIFEST="$REPO_ROOT/playbook/shopfloor-setup/BIOS/models.txt"
CHECK_SCRIPT="$REPO_ROOT/playbook/shopfloor-setup/BIOS/check-bios.cmd"
SSH="sshpass -p $PXE_PASS ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 $PXE_USER@$PXE_SERVER"
SCP="sshpass -p $PXE_PASS scp -o StrictHostKeyChecking=no -o ConnectTimeout=10"
# Verify sources exist
if [ ! -d "$BIOS_DIR" ] || [ -z "$(ls -A "$BIOS_DIR" 2>/dev/null)" ]; then
echo "ERROR: bios-staging/ is empty or missing. Run ./pull-bios.sh first."
exit 1
fi
if [ ! -f "$MANIFEST" ]; then
echo "ERROR: playbook/shopfloor-setup/BIOS/models.txt not found."
exit 1
fi
echo "Deploying BIOS files to $PXE_SERVER..."
# Create remote directory
$SSH "sudo mkdir -p '$REMOTE_DIR' && sudo chown $PXE_USER:$PXE_USER '$REMOTE_DIR'"
# Copy check-bios.cmd and models.txt
echo " Copying check-bios.cmd + models.txt..."
$SCP "$CHECK_SCRIPT" "$MANIFEST" "$PXE_USER@$PXE_SERVER:$REMOTE_DIR/"
# Copy BIOS binaries
COUNT=$(find "$BIOS_DIR" -maxdepth 1 -name '*.exe' | wc -l)
SIZE=$(du -sh "$BIOS_DIR" | cut -f1)
echo " Copying $COUNT BIOS binaries ($SIZE)..."
$SCP "$BIOS_DIR"/*.exe "$PXE_USER@$PXE_SERVER:$REMOTE_DIR/"
# Verify
REMOTE_COUNT=$($SSH "find '$REMOTE_DIR' -name '*.exe' | wc -l")
echo "Done: $REMOTE_COUNT files on $PXE_SERVER:$REMOTE_DIR"

scripts/download-drivers.py Executable file

@@ -0,0 +1,814 @@
#!/usr/bin/env python3
"""
download-drivers.py — Download Dell drivers (+ BIOS) and push to PXE server
Downloads driver packs directly from Dell's public catalog (downloads.dell.com).
Matches models from user_selections.json / HardwareDriver.json against Dell's
DriverPackCatalog. No GE network or Media Creator Lite required.
Usage:
./download-drivers.py # download + push selected drivers
./download-drivers.py --list # preview without downloading
./download-drivers.py --bios # also download BIOS updates
./download-drivers.py --image gea-standard # push directly to an image
./download-drivers.py --force # re-download even if on server
./download-drivers.py --parallel 4 # process 4 packs concurrently
Requires: curl, wget, 7z, zip, sshpass, rsync
"""
import argparse
import concurrent.futures
import hashlib
import json
import os
import re
import subprocess
import sys
import tempfile
import threading
import xml.etree.ElementTree as ET
from pathlib import Path
REPO_DIR = Path(__file__).resolve().parent.parent  # scripts/ -> repo root
PXE_HOST = "10.9.100.1"
PXE_USER = "pxe"
PXE_PASS = "pxe"
UPLOAD_DEST = "/home/pxe/image-upload"
IMAGE_BASE = "/srv/samba/winpeapps"
DELL_DRIVER_CATALOG = "https://downloads.dell.com/catalog/DriverPackCatalog.cab"
DELL_BIOS_CATALOG = "https://downloads.dell.com/catalog/DellSDPCatalogPC.cab"
DELL_BASE = "https://downloads.dell.com"
NS = {"d": "openmanage/cm/dm"}
SDP_CAT_NS = "http://schemas.microsoft.com/sms/2005/04/CorporatePublishing/SystemsManagementCatalog.xsd"
SDP_PKG_NS = "http://schemas.microsoft.com/wsus/2005/04/CorporatePublishing/SoftwareDistributionPackage.xsd"
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def format_size(n):
if n >= 1024**3: return f"{n / 1024**3:.1f} GB"
if n >= 1024**2: return f"{n / 1024**2:.0f} MB"
return f"{n / 1024:.0f} KB"
def resolve_dest_dir(d):
"""Convert *destinationdir*\\Deploy\\... to Deploy/..."""
return d.replace("*destinationdir*\\", "").replace("*destinationdir*", "").replace("\\", "/")
def ssh_cmd(host, cmd):
return subprocess.run(
["sshpass", "-p", PXE_PASS, "ssh", "-o", "StrictHostKeyChecking=no",
"-o", "LogLevel=ERROR", f"{PXE_USER}@{host}", cmd],
capture_output=True, text=True)
def verify_sha256(filepath, expected):
sha = hashlib.sha256()
with open(filepath, "rb") as f:
for chunk in iter(lambda: f.read(1024 * 1024), b""):
sha.update(chunk)
return sha.hexdigest().upper() == expected.upper()
def extract_model_ids(name):
"""Extract model identifiers like '5450', 'PC14250', 'QCM1250'."""
ids = set(re.findall(r'\b([A-Z]*\d[\w]{2,})\b', name, re.I))
# Dell uses Qx* codenames where GE uses QC*/QB* (e.g. QxM1250 = QCM1250)
extras = set()
for mid in ids:
if re.match(r'^Q[A-Z][A-Z]\d', mid, re.I):
extras.add("Qx" + mid[2:]) # QCM1250 -> QxM1250
elif re.match(r'^Qx[A-Z]\d', mid, re.I):
pass # already in Qx form, will match directly
return ids | extras
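# Examples (illustrative):
#   extract_model_ids("Latitude 5450") -> {"5450"}
#   extract_model_ids("QCM1250")       -> {"QCM1250", "QxM1250"}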
def get_brand(name):
lower = name.lower()
for b in ["latitude", "precision", "optiplex", "pro max", "pro"]:
if b in lower:
return b
return None
# ---------------------------------------------------------------------------
# Catalog download + parsing
# ---------------------------------------------------------------------------
def download_and_extract_cab(url, tmpdir):
"""Download a .cab, extract with 7z, return path to XML inside."""
cab = os.path.join(tmpdir, os.path.basename(url))
print(f" Fetching {os.path.basename(url)}...", end=" ", flush=True)
r = subprocess.run(["wget", "-q", "-O", cab, url])
if r.returncode != 0:
print("FAILED"); return None
subprocess.run(["7z", "x", "-y", f"-o{tmpdir}", cab],
capture_output=True, text=True)
os.remove(cab)
xml_name = os.path.basename(url).replace(".cab", ".xml")
xml_path = os.path.join(tmpdir, xml_name)
if os.path.exists(xml_path):
print("OK"); return xml_path
print("FAILED (XML not found)"); return None
def parse_driver_catalog(xml_path, os_filter=None):
"""Parse DriverPackCatalog.xml → list of driver pack dicts.
os_filter: list of OS prefixes to match, e.g. ["Windows10", "Windows11"].
Defaults to ["Windows10", "Windows11"] (both).
"""
if os_filter is None:
os_filter = ["Windows10", "Windows11"]
tree = ET.parse(xml_path)
packs = []
for pkg in tree.getroot().findall(".//d:DriverPackage", NS):
if pkg.get("type") != "win":
continue
os_codes = [o.get("osCode", "") for o in pkg.findall(".//d:OperatingSystem", NS)]
if not any(code.startswith(prefix) for code in os_codes for prefix in os_filter):
continue
models = []
for m in pkg.findall(".//d:Model", NS):
d = m.find("d:Display", NS)
models.append({
"name": m.get("name", ""),
"display": d.text.strip() if d is not None and d.text else ""
})
sha256 = ""
for h in pkg.findall(".//d:Cryptography/d:Hash", NS):
if h.get("algorithm") == "SHA256":
sha256 = h.text; break
path = pkg.get("path", "")
packs.append({
"url": f"{DELL_BASE}/{path}",
"filename": path.split("/")[-1],
"size": int(pkg.get("size", 0)),
"sha256": sha256,
"models": models,
})
return packs
def parse_bios_catalog(xml_path, model_names):
"""Parse DellSDPCatalogPC.xml → list of latest BIOS update dicts for given models."""
tree = ET.parse(xml_path)
root = tree.getroot()
bios = {} # model_key → best entry
for pkg in root.iter(f"{{{SDP_CAT_NS}}}SoftwareDistributionPackage"):
title_elem = pkg.find(f".//{{{SDP_PKG_NS}}}Title")
if title_elem is None or not title_elem.text:
continue
title = title_elem.text
if "BIOS" not in title:
continue
# Find which of our models this BIOS applies to
matched_model = None
for mname in model_names:
for mid in extract_model_ids(mname):
if mid in title:
matched_model = mname
break
if matched_model:
break
if not matched_model:
continue
# Extract version from title (e.g., "...BIOS,1.20.1,1.20.1")
ver_match = re.search(r",(\d+\.\d+\.\d+)", title)
version = ver_match.group(1) if ver_match else "0.0.0"
# Get download URL
origin = pkg.find(f".//{{{SDP_PKG_NS}}}OriginFile")
if origin is None:
continue
entry = {
"title": title,
"version": version,
"filename": origin.get("FileName", ""),
"url": origin.get("OriginUri", ""),
"size": int(origin.get("Size", 0)),
"model": matched_model,
}
# Keep latest version per model, comparing numerically ("1.9.0" would
# incorrectly sort after "1.20.1" as a plain string)
key = matched_model
new_ver = tuple(int(x) for x in version.split("."))
if key not in bios or new_ver > tuple(int(x) for x in bios[key]["version"].split(".")):
bios[key] = entry
return list(bios.values())
# ---------------------------------------------------------------------------
# Model matching
# ---------------------------------------------------------------------------
def find_dell_packs(our_model_name, dell_packs):
"""Find Dell driver pack(s) matching one of our model names."""
our_ids = extract_model_ids(our_model_name)
our_brand = get_brand(our_model_name)
our_rugged = "rugged" in our_model_name.lower()
if not our_ids:
return []
matches = []
for pack in dell_packs:
for dm in pack["models"]:
dell_ids = extract_model_ids(dm["name"]) | extract_model_ids(dm["display"])
if not (our_ids & dell_ids):
continue
# Brand check: if we specify a brand, Dell must match (or have none)
if our_brand:
dell_brand = get_brand(dm["name"])
if dell_brand and dell_brand != our_brand:
continue
# Rugged check: if Dell explicitly labels pack as Rugged,
# only match our Rugged models (prevents non-rugged 5430 matching
# Rugged 5430 pack). If Dell doesn't say Rugged, allow any match
# (handles 7220/7230 which are Rugged-only but unlabeled in catalog).
dell_rugged = "rugged" in dm["name"].lower() or "rugged" in pack["filename"].lower()
if dell_rugged and not our_rugged:
continue
matches.append(pack)
break
# Deduplicate by URL
seen = set()
return [m for m in matches if m["url"] not in seen and not seen.add(m["url"])]
# ---------------------------------------------------------------------------
# Download + push
# ---------------------------------------------------------------------------
def make_zip_name(filename, dest_dir):
"""Generate a zip filename matching GE convention: win11_<model>_<ver>.zip"""
# Strip extension and version suffix to get base name
base = re.sub(r'[_-]Win1[01][_.].*', '', filename, flags=re.I)
base = re.sub(r'[-_]', '', base).lower()
# Extract version from filename (e.g., A04, A13)
ver_match = re.search(r'_A(\d+)', filename, re.I)
ver = f"a{ver_match.group(1)}" if ver_match else "a00"
return f"win11_{base}_{ver}.zip"
def process_download(args, url, filename, sha256, size, target_dir, label, tmpdir):
"""Download, verify, extract, re-zip, and push one driver pack. Returns True on success.
Each caller should pass a unique tmpdir to avoid collisions in parallel mode."""
local_file = os.path.join(tmpdir, filename)
# Download; curl aborts if the rate stays below 1000 B/s for 30s,
# retrying up to 3 times
print(f" [{label}] Downloading {format_size(size)}...")
r = subprocess.run(["curl", "-L", "-s", "-S",
"--speed-limit", "1000", "--speed-time", "30",
"--retry", "3", "--retry-delay", "5",
"-o", local_file, url])
if r.returncode != 0 or not os.path.exists(local_file):
print(f" [{label}] ERROR: Download failed (curl exit {r.returncode})")
if os.path.exists(local_file): os.remove(local_file)
return False
# Verify hash (if provided)
if sha256:
print(f" [{label}] Verifying SHA256...", end=" ", flush=True)
if not verify_sha256(local_file, sha256):
print("MISMATCH!")
os.remove(local_file)
return False
print("OK")
# Extract locally with 7z (unique subdir per worker)
extract_dir = os.path.join(tmpdir, "extract")
os.makedirs(extract_dir, exist_ok=True)
print(f" [{label}] Extracting...", end=" ", flush=True)
r = subprocess.run(["7z", "x", "-y", f"-o{extract_dir}", local_file],
capture_output=True, text=True)
os.remove(local_file)
if r.returncode != 0:
print(f"FAILED: {r.stderr[:200]}")
subprocess.run(["rm", "-rf", extract_dir])
return False
print("OK")
# Re-zip for PESetup.exe (expects zipped driver packs, not loose files)
zip_name = make_zip_name(filename, target_dir)
zip_path = os.path.join(tmpdir, zip_name)
print(f" [{label}] Zipping as {zip_name}...", end=" ", flush=True)
r = subprocess.run(["zip", "-r", "-q", zip_path, "."],
cwd=extract_dir)
subprocess.run(["rm", "-rf", extract_dir])
if r.returncode != 0:
print("FAILED")
return False
zip_size = os.path.getsize(zip_path)
print(f"OK ({format_size(zip_size)})")
# Push zip to PXE server
print(f" [{label}] Pushing to {target_dir}/{zip_name}...")
ssh_cmd(args.server, f"mkdir -p '{target_dir}'")
r = subprocess.run([
"rsync", "-a",
"-e", f"sshpass -p {PXE_PASS} ssh -o StrictHostKeyChecking=no -o LogLevel=ERROR",
zip_path, f"{PXE_USER}@{args.server}:{target_dir}/"
])
os.remove(zip_path)
if r.returncode != 0:
print(f" [{label}] ERROR: rsync failed")
return False
return True
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(
description="Download Dell drivers (+ BIOS) and push to PXE server")
parser.add_argument("--list", action="store_true",
help="Preview without downloading")
parser.add_argument("--bios", action="store_true",
help="Also download BIOS updates")
parser.add_argument("--image",
help="Push directly to image type (e.g. gea-standard)")
parser.add_argument("--server", default=PXE_HOST,
help=f"PXE server IP (default: {PXE_HOST})")
parser.add_argument("--force", action="store_true",
help="Re-download even if already on server")
parser.add_argument("--cache-path",
help="Path to local image dir with Deploy/Control/ and Tools/")
parser.add_argument("--local",
help="Download to local directory (no server needed)")
parser.add_argument("--parallel", type=int, default=1, metavar="N",
help="Process N packs concurrently (default: 1)")
args = parser.parse_args()
# --- Load our model selections ---
control_dir = tools_dir = None
if args.cache_path:
p = Path(args.cache_path)
control_dir = p / "Deploy" / "Control"
tools_dir = p / "Tools" if (p / "Tools").is_dir() else p.parent / "Tools"
else:
for d in sorted(REPO_DIR.iterdir()):
if d.is_dir() and (d / "Deploy" / "Control" / "HardwareDriver.json").exists():
control_dir = d / "Deploy" / "Control"
tools_dir = d / "Tools"
break
if not control_dir or not (control_dir / "HardwareDriver.json").exists():
sys.exit("ERROR: HardwareDriver.json not found. Use --cache-path or ensure a local image dir exists.")
if not (tools_dir / "user_selections.json").exists():
sys.exit("ERROR: user_selections.json not found")
with open(control_dir / "HardwareDriver.json") as f:
hw_drivers = json.load(f)
with open(tools_dir / "user_selections.json") as f:
selections = json.load(f)[0]
os_id = selections["OperatingSystemSelection"]
selected_families = set(m["Id"] for m in selections["HardwareModelSelection"])
# Filter to selected + matching OS
our_entries = [d for d in hw_drivers
if d["family"] in selected_families and os_id in d.get("aOsIds", [])]
# Collect unique model names and their DestinationDirs
model_dest_map = {} # model_name → dest_dir
for entry in our_entries:
dest = resolve_dest_dir(entry["DestinationDir"])
for m in entry["models"].split(","):
m = m.strip()
if m not in model_dest_map:
model_dest_map[m] = dest
base_path = f"{IMAGE_BASE}/{args.image}" if args.image else UPLOAD_DEST
print()
print("=" * 60)
print(" Dell Driver Downloader for PXE Server")
print("=" * 60)
# --- Download Dell catalog ---
with tempfile.TemporaryDirectory(prefix="dell-catalog-") as catdir:
xml_path = download_and_extract_cab(DELL_DRIVER_CATALOG, catdir)
if not xml_path:
sys.exit("ERROR: Could not download Dell driver catalog")
dell_packs = parse_driver_catalog(xml_path)
print(f" Catalog: {len(dell_packs)} Win10/11 driver packs available")
bios_updates = []
if args.bios:
bios_xml = download_and_extract_cab(DELL_BIOS_CATALOG, catdir)
if bios_xml:
bios_updates = parse_bios_catalog(bios_xml, list(model_dest_map.keys()))
print(f" BIOS: {len(bios_updates)} update(s) found")
# --- Match our models to Dell catalog ---
# Group: dest_dir → list of Dell packs to download
download_plan = [] # list of {dell_pack, dest_dir, our_models}
unmatched = []
seen_urls = set()
dest_seen = {} # dest_dir → set of URLs already planned
for model_name, dest_dir in model_dest_map.items():
matches = find_dell_packs(model_name, dell_packs)
if not matches:
unmatched.append(model_name)
continue
for pack in matches:
if pack["url"] in seen_urls:
continue
seen_urls.add(pack["url"])
download_plan.append({
"pack": pack,
"dest_dir": dest_dir,
"model": model_name,
})
    # --- Display plan ---
    print()
    total_drv_size = sum(d["pack"]["size"] for d in download_plan)
    print(f" Drivers: {len(download_plan)} pack(s) to download ({format_size(total_drv_size)})")
    print(f" Target: {args.server}:{base_path}")
    if unmatched:
        print(f" No Dell match: {len(unmatched)} model(s)")
    print()
    for i, d in enumerate(download_plan, 1):
        p = d["pack"]
        print(f" {i:3}. {d['model']:<38} {format_size(p['size']):>8} {p['filename']}")
        print(f" -> {d['dest_dir']}")
    if unmatched:
        print()
        print(" Unmatched models (not in Dell public catalog):")
        for m in unmatched:
            print(f" - {m}")
    if bios_updates:
        total_bios = sum(b["size"] for b in bios_updates)
        print()
        print(f" BIOS updates: {len(bios_updates)} ({format_size(total_bios)})")
        for b in bios_updates:
            print(f" {b['model']:<35} v{b['version']} {b['filename']}")
    print()
    if args.list:
        print(" (--list mode, nothing downloaded)")
        return
    # --- LOCAL MODE: download to local directory ---
    if args.local:
        local_dir = Path(args.local)
        drv_dir = local_dir / "drivers"
        bios_local_dir = local_dir / "bios"
        drv_dir.mkdir(parents=True, exist_ok=True)

        # Load local manifest
        manifest_path = local_dir / "manifest.json"
        manifest = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}

        # Thread-safe counters and manifest access
        _lock = threading.Lock()
        counters = {"completed": 0, "skipped": 0, "errors": 0}

        # Build GE filename mapping from our HardwareDriver.json entries
        ge_filename_map = {}  # model_name → GE FileName
        for entry in our_entries:
            fn = entry.get("FileName") or entry.get("fileName", "")
            dest = resolve_dest_dir(entry.get("DestinationDir") or entry.get("destinationDir", ""))
            for m in (entry.get("models") or entry.get("modelswminame", "")).split(","):
                m = m.strip()
                if m and fn:
                    ge_filename_map[m] = {"filename": fn, "dest_dir": dest}

        def _download_one_local(i, d):
            """Download a single driver pack (local mode). Thread-safe."""
            pack = d["pack"]
            tag = f"[{i}/{len(download_plan)}]"
            with _lock:
                print(f"{'=' * 60}")
                print(f"{tag} {d['model']} ({format_size(pack['size'])})")
                print(f"{'=' * 60}")

            # Check if already downloaded (manifest or file size match)
            local_file = drv_dir / pack["filename"]
            if not args.force:
                with _lock:
                    existing_hash = manifest.get("drivers", {}).get(pack["url"])
                if existing_hash == pack["sha256"]:
                    with _lock:
                        print(f"{tag} Already downloaded (hash matches)")
                        counters["skipped"] += 1
                    return
                if local_file.exists() and local_file.stat().st_size == pack["size"]:
                    with _lock:
                        print(f"{tag} Already downloaded (size matches)")
                        manifest.setdefault("drivers", {})[pack["url"]] = pack["sha256"]
                        counters["skipped"] += 1
                    return

            # Download raw .exe to drivers/
            with _lock:
                print(f"{tag} Downloading {format_size(pack['size'])}...")
            r = subprocess.run(["curl", "-L", "-s", "-S",
                                "--speed-limit", "1000", "--speed-time", "30",
                                "--retry", "3", "--retry-delay", "5",
                                "-o", str(local_file), pack["url"]])
            if r.returncode != 0 or not local_file.exists():
                with _lock:
                    print(f"{tag} ERROR: Download failed (curl exit {r.returncode})")
                    counters["errors"] += 1
                if local_file.exists():
                    local_file.unlink()
                return

            # Verify size first
            actual_size = local_file.stat().st_size
            if pack["size"] and actual_size != pack["size"]:
                with _lock:
                    print(f"{tag} ERROR: Size mismatch (got {format_size(actual_size)}, expected {format_size(pack['size'])})")
                    counters["errors"] += 1
                local_file.unlink()
                return

            # Verify hash
            if pack["sha256"]:
                with _lock:
                    print(f"{tag} Verifying SHA256...", end=" ", flush=True)
                if not verify_sha256(str(local_file), pack["sha256"]):
                    with _lock:
                        print("MISMATCH!")
                        counters["errors"] += 1
                    local_file.unlink()
                    return
                with _lock:
                    print("OK")

            ge_info = ge_filename_map.get(d["model"], {})
            with _lock:
                counters["completed"] += 1
                manifest.setdefault("drivers", {})[pack["url"]] = pack["sha256"]
                manifest.setdefault("mapping", {})[pack["filename"]] = {
                    "model": d["model"],
                    "dell_filename": pack["filename"],
                    "ge_filename": ge_info.get("filename", ""),
                    "dest_dir": d["dest_dir"],
                    "sha256": pack["sha256"],
                    "size": pack["size"],
                }
                print(f"{tag} Done.")

        workers = max(1, args.parallel)
        if workers > 1:
            print(f" Downloading with {workers} parallel workers")
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(_download_one_local, i, d)
                       for i, d in enumerate(download_plan, 1)]
            concurrent.futures.wait(futures)
        # --- Download BIOS ---
        bios_ok = bios_err = 0
        if bios_updates:
            bios_local_dir.mkdir(parents=True, exist_ok=True)
            print(f"{'=' * 60}")
            print(f" BIOS Updates -> {bios_local_dir}")
            print(f"{'=' * 60}")

            def _download_one_bios(b):
                nonlocal bios_ok, bios_err
                with _lock:
                    print(f"\n {b['model']} v{b['version']}")
                if not args.force:
                    with _lock:
                        existing = manifest.get("bios", {}).get(b["model"])
                    if existing == b["version"]:
                        with _lock:
                            print(f" Already downloaded (v{b['version']})")
                        return
                local_file = bios_local_dir / b["filename"]
                with _lock:
                    print(f" [{b['model']}] Downloading {format_size(b['size'])}...")
                r = subprocess.run(["curl", "-L", "-s", "-S",
                                    "--speed-limit", "1000", "--speed-time", "30",
                                    "--retry", "3", "--retry-delay", "5",
                                    "-o", str(local_file), b["url"]])
                if r.returncode != 0:
                    with _lock:
                        print(f" [{b['model']}] ERROR: Download failed")
                        bios_err += 1
                    if local_file.exists():
                        local_file.unlink()
                    return
                with _lock:
                    bios_ok += 1
                    manifest.setdefault("bios", {})[b["model"]] = b["version"]
                    manifest.setdefault("bios_mapping", {})[b["filename"]] = {
                        "model": b["model"],
                        "version": b["version"],
                        "filename": b["filename"],
                    }
                    print(f" [{b['model']}] Done.")

            with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(_download_one_bios, b) for b in bios_updates]
                concurrent.futures.wait(futures)

        # Save manifest
        manifest_path.write_text(json.dumps(manifest, indent=2))

        # --- Summary ---
        print()
        print(f"{'=' * 60}")
        print(" Summary — Local Download")
        print(f"{'=' * 60}")
        print(f" Drivers downloaded: {counters['completed']}")
        if counters["skipped"]:
            print(f" Drivers skipped: {counters['skipped']} (already have)")
        if counters["errors"]:
            print(f" Drivers failed: {counters['errors']}")
        if bios_updates:
            print(f" BIOS downloaded: {bios_ok}")
            if bios_err:
                print(f" BIOS failed: {bios_err}")
        print(f" Saved to: {local_dir}")
        print(f" Manifest: {manifest_path}")
        print()
        print(" To push to server later:")
        print(f" python3 download-drivers.py --push-local {local_dir}")
        print()
        return
    # --- REMOTE MODE: download and push to server ---

    # --- Verify SSH ---
    print(f" Testing SSH to {args.server}...", end=" ", flush=True)
    r = ssh_cmd(args.server, "echo OK")
    if r.stdout.strip() != "OK":
        print("FAILED")
        sys.exit(f" Cannot SSH to {PXE_USER}@{args.server}: {r.stderr.strip()}")
    print("OK")
    print()

    # --- Load manifest (tracks what's been downloaded by hash) ---
    manifest_path = f"{base_path}/.driver-manifest.json"
    r = ssh_cmd(args.server, f"cat '{manifest_path}' 2>/dev/null")
    manifest = json.loads(r.stdout) if r.stdout.strip() else {}

    # Thread-safe counters and manifest access
    _lock = threading.Lock()
    counters = {"completed": 0, "skipped": 0, "errors": 0}

    # --- Download drivers ---
    with tempfile.TemporaryDirectory(prefix="pxe-drivers-") as tmpdir:

        def _process_one_remote(i, d):
            """Download, extract, re-zip, and push one driver pack. Thread-safe."""
            pack = d["pack"]
            target = f"{base_path}/{d['dest_dir']}"
            tag = f"[{i}/{len(download_plan)}]"
            with _lock:
                print(f"{'=' * 60}")
                print(f"{tag} {d['model']} ({format_size(pack['size'])})")
                print(f"{'=' * 60}")
            if not args.force:
                with _lock:
                    existing_hash = manifest.get(d["dest_dir"], {}).get(pack["filename"])
                if existing_hash == pack["sha256"]:
                    with _lock:
                        print(f"{tag} Up to date (hash matches manifest)")
                        counters["skipped"] += 1
                    return
            # Each worker gets its own temp subdirectory
            worker_tmp = os.path.join(tmpdir, f"worker-{i}")
            os.makedirs(worker_tmp, exist_ok=True)
            ok = process_download(args, pack["url"], pack["filename"],
                                  pack["sha256"], pack["size"], target,
                                  d["model"], worker_tmp)
            with _lock:
                if ok:
                    counters["completed"] += 1
                    manifest.setdefault(d["dest_dir"], {})[pack["filename"]] = pack["sha256"]
                    print(f"{tag} Done.")
                else:
                    counters["errors"] += 1

        workers = max(1, args.parallel)
        if workers > 1:
            print(f" Processing with {workers} parallel workers")
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(_process_one_remote, i, d)
                       for i, d in enumerate(download_plan, 1)]
            concurrent.futures.wait(futures)
        # --- Download BIOS (goes to enrollment share, shared across all images) ---
        bios_ok = bios_err = 0
        bios_dir = "/srv/samba/enrollment/BIOS"
        if bios_updates:
            print(f"{'=' * 60}")
            print(f" BIOS Updates -> {bios_dir}")
            print(f"{'=' * 60}")
            ssh_cmd(args.server, f"mkdir -p '{bios_dir}'")
            models_txt = []  # lines for models.txt manifest

            def _process_one_bios(b):
                nonlocal bios_ok, bios_err
                target = f"{bios_dir}/{b['filename']}"
                with _lock:
                    print(f"\n {b['model']} v{b['version']}")
                if not args.force:
                    with _lock:
                        existing = manifest.get("BIOS", {}).get(b["model"])
                    if existing == b["version"]:
                        with _lock:
                            print(f" Up to date (v{b['version']})")
                            models_txt.append(f"{b['model']}|{b['filename']}")
                        return
                # BIOS .exe goes as-is (not extracted)
                bios_tmp = os.path.join(tmpdir, f"bios-{b['filename']}")
                with _lock:
                    print(f" [{b['model']}] Downloading {format_size(b['size'])}...")
                r = subprocess.run(["curl", "-L", "-s", "-S",
                                    "--speed-limit", "1000", "--speed-time", "30",
                                    "--retry", "3", "--retry-delay", "5",
                                    "-o", bios_tmp, b["url"]])
                if r.returncode != 0:
                    with _lock:
                        print(f" [{b['model']}] ERROR: Download failed")
                        bios_err += 1
                    if os.path.exists(bios_tmp):
                        os.remove(bios_tmp)
                    return
                r = subprocess.run([
                    "rsync", "-a",
                    "-e", f"sshpass -p {PXE_PASS} ssh -o StrictHostKeyChecking=no -o LogLevel=ERROR",
                    bios_tmp, f"{PXE_USER}@{args.server}:{target}"
                ])
                os.remove(bios_tmp)
                if r.returncode != 0:
                    with _lock:
                        print(f" [{b['model']}] ERROR: Push failed")
                        bios_err += 1
                else:
                    with _lock:
                        print(f" [{b['model']}] Done.")
                        bios_ok += 1
                        manifest.setdefault("BIOS", {})[b["model"]] = b["version"]
                        models_txt.append(f"{b['model']}|{b['filename']}")

            with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(_process_one_bios, b) for b in bios_updates]
                concurrent.futures.wait(futures)

            # Generate models.txt for check-bios.cmd
            if models_txt:
                manifest_content = "# ModelSubstring|BIOSFile\\n" + "\\n".join(models_txt) + "\\n"
                ssh_cmd(args.server,
                        f"printf '{manifest_content}' > '{bios_dir}/models.txt'")
                print(f"\n models.txt updated ({len(models_txt)} entries)")
    # --- Save manifest ---
    completed, skipped, errors = counters["completed"], counters["skipped"], counters["errors"]
    if completed > 0 or bios_ok > 0:
        manifest_json = json.dumps(manifest, indent=2)
        ssh_cmd(args.server,
                f"cat > '{manifest_path}' << 'MANIFEST_EOF'\n{manifest_json}\nMANIFEST_EOF")
        print(f" Manifest saved to {manifest_path}")

    # --- Summary ---
    print()
    print(f"{'=' * 60}")
    print(" Summary")
    print(f"{'=' * 60}")
    print(f" Drivers downloaded: {completed}")
    if skipped:
        print(f" Drivers skipped: {skipped} (up to date)")
    if errors:
        print(f" Drivers failed: {errors}")
    if bios_updates:
        print(f" BIOS downloaded: {bios_ok}")
        if bios_err:
            print(f" BIOS failed: {bios_err}")
    print()
    if completed > 0 and not args.image:
        print(f" Drivers staged in {base_path}/Deploy/Out-of-box Drivers/")
        print(f" Use the webapp (http://{args.server}:9009) to import,")
        print(" or re-run with --image <type> to push directly.")
    print()


if __name__ == "__main__":
    main()

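In local mode the script records every pack in `manifest.json`, with a `mapping` entry per Dell filename carrying `sha256` and `size`. A minimal sketch of how a later pass (for example before `--push-local`) could re-verify the mirror against that manifest; `verify_sha256` and `recheck_manifest` here are illustrative stand-ins, not the script's actual helpers, though the manifest layout is taken from the code above:

```python
import hashlib
import json
from pathlib import Path

def verify_sha256(path, expected):
    """Stream the file in 1 MiB chunks and compare its SHA-256 to the catalog value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest().lower() == expected.lower()

def recheck_manifest(local_dir: Path):
    """Return the list of driver files that are missing or fail hash verification."""
    manifest = json.loads((local_dir / "manifest.json").read_text())
    bad = []
    for info in manifest.get("mapping", {}).values():
        f = local_dir / "drivers" / info["dell_filename"]
        if not f.exists() or not verify_sha256(f, info["sha256"]):
            bad.append(info["dell_filename"])
    return bad
```

A push step could refuse to run (or re-download) when `recheck_manifest` returns a non-empty list.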
scripts/download-packages.sh (new executable file)

@@ -0,0 +1,144 @@
#!/bin/bash
#
# download-packages.sh - Download all .deb packages needed for offline PXE server setup
#
# The PXE server installs Ubuntu 24.04 (Noble), so all packages MUST come from the
# 24.04 archive. If this script is run on a non-24.04 host (e.g. Zorin 17 / 22.04),
# it auto-spawns an Ubuntu 24.04 docker container to do the download.
#
# Usage:
# ./download-packages.sh [output_directory]
#
# Default output: ./offline-packages/
set -euo pipefail
OUT_DIR="${1:-./offline-packages}"
OUT_DIR_ABS="$(cd "$(dirname "$OUT_DIR")" 2>/dev/null && pwd)/$(basename "$OUT_DIR")"
# Detect host Ubuntu codename. Run inside the container if not Noble (24.04).
HOST_CODENAME="$(. /etc/os-release && echo "${UBUNTU_CODENAME:-${VERSION_CODENAME:-}}")"
if [ "${IN_DOCKER:-}" != "1" ] && [ "$HOST_CODENAME" != "noble" ]; then
    echo "Host is '$HOST_CODENAME', not 'noble' (Ubuntu 24.04)."
    echo "Re-running inside ubuntu:24.04 docker container..."
    echo ""
    if ! command -v docker >/dev/null; then
        echo "ERROR: docker not installed. Install docker or run on a real Ubuntu 24.04 host."
        exit 1
    fi
    SCRIPT_PATH="$(readlink -f "$0")"
    REPO_DIR="$(cd "$(dirname "$SCRIPT_PATH")"/.. && pwd)"
    mkdir -p "$OUT_DIR_ABS"
    docker run --rm -i \
        -v "$REPO_DIR:/repo" \
        -v "$OUT_DIR_ABS:/out" \
        -e IN_DOCKER=1 \
        -w /repo \
        ubuntu:24.04 \
        bash -c "apt-get update -qq && apt-get install -y --no-install-recommends sudo python3-pip python3-setuptools python3-wheel ca-certificates >/dev/null && /repo/scripts/download-packages.sh /out"
    echo ""
    echo "============================================"
    echo "Container build complete. Files in: $OUT_DIR_ABS"
    echo "============================================"
    exit 0
fi
mkdir -p "$OUT_DIR"
# Packages installed by the Ansible playbook (pxe_server_setup.yml)
PLAYBOOK_PACKAGES=(
    ansible
    dnsmasq
    apache2
    samba
    unzip
    ufw
    cron
    wimtools
    p7zip-full
    grub-efi-amd64-bin
    grub-common
    conntrack
    busybox-static
    zstd
    cpio
)

# Packages installed during autoinstall late-commands (NetworkManager, WiFi, etc.)
AUTOINSTALL_PACKAGES=(
    network-manager
    wpasupplicant
    wireless-tools
    linux-firmware
    firmware-sof-signed
)
ALL_PACKAGES=("${PLAYBOOK_PACKAGES[@]}" "${AUTOINSTALL_PACKAGES[@]}")
echo "============================================"
echo "Offline Package Downloader (Ubuntu 24.04 noble)"
echo "============================================"
echo "Output directory: $OUT_DIR"
echo ""
echo "Packages to resolve:"
printf ' - %s\n' "${ALL_PACKAGES[@]}"
echo ""
# Update package cache
echo "[1/4] Updating package cache..."
sudo apt-get update -qq
# Simulate install to find all dependencies
echo "[2/4] Resolving dependencies..."
EXPLICIT_DEPS=$(apt-get install --simulate "${ALL_PACKAGES[@]}" 2>&1 \
    | grep "^Inst " \
    | awk '{print $2}')
# ALSO pull every package that would upgrade in a dist-upgrade. This is
# critical: the Ubuntu ISO ships a point-in-time baseline, but our explicit
# packages (from noble-updates) may depend on *newer* versions of ISO-baseline
# packages (e.g. gnupg 17.4 needs matching gpgv 17.4). Without this, offline
# install fails with dpkg "dependency problems" because transitive version
# bumps aren't captured by --simulate on the explicit list.
UPGRADE_DEPS=$(apt-get dist-upgrade --simulate 2>&1 \
    | grep "^Inst " \
    | awk '{print $2}')
DEPS=$(printf '%s\n%s\n' "$EXPLICIT_DEPS" "$UPGRADE_DEPS" | sort -u | grep -v '^$')
DEP_COUNT=$(echo "$DEPS" | wc -l)
echo " Found $DEP_COUNT packages (explicit + baseline upgrades)"
# Download all packages
echo "[3/4] Downloading .deb packages to $OUT_DIR..."
cd "$OUT_DIR"
apt-get download $DEPS 2>&1 | tail -5
DEB_COUNT=$(ls -1 *.deb 2>/dev/null | wc -l)
TOTAL_SIZE=$(du -sh . | cut -f1)
echo " $DEB_COUNT packages ($TOTAL_SIZE)"
# Download pip wheels for Flask webapp (offline install)
echo "[4/4] Downloading Python wheels for webapp..."
# Put pip wheels in the repo root's pip-wheels/ (/repo/pip-wheels in docker), not under OUT_DIR
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd)"
PIP_DIR="$REPO_ROOT/pip-wheels"
mkdir -p "$PIP_DIR"
pip3 download -d "$PIP_DIR" flask lxml 2>&1 | tail -5
WHL_COUNT=$(ls -1 "$PIP_DIR"/*.whl "$PIP_DIR"/*.tar.gz 2>/dev/null | wc -l)
echo " $WHL_COUNT Python packages downloaded to pip-wheels/"
echo ""
echo "============================================"
echo "Download complete!"
echo "============================================"
echo " .deb packages: $DEB_COUNT ($TOTAL_SIZE) in $OUT_DIR/"
echo " Python wheels: $WHL_COUNT in $PIP_DIR/"
echo ""
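The dependency-resolution step above keys off apt's simulation output, keeping only the package name from each `Inst` line. A tiny illustration with canned simulator output (the version strings are invented for the example):

```shell
sample='Inst dnsmasq (2.90-2build2 Ubuntu:24.04/noble [amd64])
Conf dnsmasq (2.90-2build2 Ubuntu:24.04/noble [amd64])
Inst samba (2:4.19.5 Ubuntu:24.04/noble [amd64])'
# Keep only "Inst" lines, take the package name (field 2), de-duplicate
printf '%s\n' "$sample" | grep '^Inst ' | awk '{print $2}' | sort -u
# prints: dnsmasq, then samba ("Conf" lines are dropped)
```

`Conf` lines report configuration steps, so filtering on `^Inst ` is what limits the download list to packages apt would actually fetch.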

scripts/prepare-boot-tools.sh (new executable file)

@@ -0,0 +1,330 @@
#!/bin/bash
#
# prepare-boot-tools.sh — Download/extract boot files for PXE boot tools
#
# Downloads Clonezilla Live and Memtest86+ for PXE booting,
# and extracts Blancco Drive Eraser from its ISO.
#
# Usage:
# ./prepare-boot-tools.sh [/path/to/blancco.iso]
#
# Output directories:
# boot-tools/clonezilla/ — vmlinuz, initrd.img, filesystem.squashfs
# boot-tools/blancco/ — extracted boot files or ISO for memdisk
# boot-tools/memtest/ — memtest.efi
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
OUT_DIR="$REPO_ROOT/boot-tools"
BLANCCO_ISO="${1:-}"
# Auto-detect Blancco ISO in project directory
if [ -z "$BLANCCO_ISO" ]; then
    BLANCCO_ISO=$(find "$REPO_ROOT" -maxdepth 1 \( -name '*DriveEraser*.iso' -o -name '*blancco*.iso' \) 2>/dev/null | head -1)
fi
mkdir -p "$OUT_DIR"/{clonezilla,blancco,memtest}
echo "============================================"
echo "PXE Boot Tools Preparation"
echo "============================================"
# --- Clonezilla Live ---
echo ""
echo "[1/3] Clonezilla Live"
CLONEZILLA_VERSION="3.2.1-6"
CLONEZILLA_FILE="clonezilla-live-${CLONEZILLA_VERSION}-amd64.zip"
CLONEZILLA_URL="https://sourceforge.net/projects/clonezilla/files/clonezilla_live_stable/${CLONEZILLA_VERSION}/${CLONEZILLA_FILE}/download"
if [ -f "$OUT_DIR/clonezilla/vmlinuz" ] && [ -f "$OUT_DIR/clonezilla/filesystem.squashfs" ]; then
    echo " Already prepared, skipping. Delete boot-tools/clonezilla/ to re-download."
else
    echo " Downloading Clonezilla Live ${CLONEZILLA_VERSION}..."
    TMPDIR=$(mktemp -d)
    wget -q --show-progress -O "$TMPDIR/$CLONEZILLA_FILE" "$CLONEZILLA_URL" || {
        echo " ERROR: Download failed. Trying alternative URL..."
        # Fallback: try the NCHC mirror
        wget -q --show-progress -O "$TMPDIR/$CLONEZILLA_FILE" \
            "https://free.nchc.org.tw/clonezilla-live/stable/${CLONEZILLA_FILE}" || {
            echo " ERROR: Could not download Clonezilla. Download manually and place in boot-tools/clonezilla/"
            echo " Need: vmlinuz, initrd.img, filesystem.squashfs from the live ZIP"
        }
    }
    if [ -f "$TMPDIR/$CLONEZILLA_FILE" ]; then
        echo " Extracting PXE boot files..."
        unzip -o -j "$TMPDIR/$CLONEZILLA_FILE" "live/vmlinuz" -d "$OUT_DIR/clonezilla/"
        unzip -o -j "$TMPDIR/$CLONEZILLA_FILE" "live/initrd.img" -d "$OUT_DIR/clonezilla/"
        unzip -o -j "$TMPDIR/$CLONEZILLA_FILE" "live/filesystem.squashfs" -d "$OUT_DIR/clonezilla/"
        rm -rf "$TMPDIR"
        echo " Done."
    fi
fi
ls -lh "$OUT_DIR/clonezilla/" 2>/dev/null | grep -E 'vmlinuz|initrd|squashfs' | sed 's/^/ /'
# --- Blancco Drive Eraser ---
echo ""
echo "[2/3] Blancco Drive Eraser"
if [ -n "$BLANCCO_ISO" ] && [ -f "$BLANCCO_ISO" ]; then
    echo " Extracting from: $BLANCCO_ISO"
    echo " Using 7z to extract (no root required)..."
    # Blancco is Arch Linux-based. We need:
    #   arch/boot/x86_64/vmlinuz-bde-linux
    #   arch/boot/x86_64/initramfs-bde-linux.img
    #   arch/boot/intel-ucode.img
    #   arch/boot/amd-ucode.img
    #   arch/boot/config.img
    #   arch/x86_64/airootfs.sfs
    TMPDIR=$(mktemp -d)
    7z x -o"$TMPDIR" "$BLANCCO_ISO" \
        "arch/boot/x86_64/vmlinuz-bde-linux" \
        "arch/boot/x86_64/initramfs-bde-linux.img" \
        "arch/boot/intel-ucode.img" \
        "arch/boot/amd-ucode.img" \
        "arch/boot/config.img" \
        "arch/x86_64/airootfs.sfs" \
        -r 2>/dev/null || {
        echo " 7z extraction failed. Install p7zip-full: apt install p7zip-full"
    }
    # Flatten into blancco/ directory for HTTP serving
    if [ -f "$TMPDIR/arch/boot/x86_64/vmlinuz-bde-linux" ]; then
        cp "$TMPDIR/arch/boot/x86_64/vmlinuz-bde-linux" "$OUT_DIR/blancco/"
        cp "$TMPDIR/arch/boot/x86_64/initramfs-bde-linux.img" "$OUT_DIR/blancco/"
        cp "$TMPDIR/arch/boot/intel-ucode.img" "$OUT_DIR/blancco/"
        cp "$TMPDIR/arch/boot/amd-ucode.img" "$OUT_DIR/blancco/"
        cp "$TMPDIR/arch/boot/config.img" "$OUT_DIR/blancco/"
        # airootfs.sfs needs to be in arch/x86_64/ path relative to HTTP root
        mkdir -p "$OUT_DIR/blancco/arch/x86_64"
        cp "$TMPDIR/arch/x86_64/airootfs.sfs" "$OUT_DIR/blancco/arch/x86_64/"
        echo " Extracted Blancco boot files."

        # --- Patch config.img (size-preserving) ---
        # config.img is a CPIO archive containing preferences.xml (padded to 32768 bytes).
        # The CPIO itself must remain exactly 194560 bytes (380 x 512-byte blocks).
        # We use Python for byte-level replacement to preserve exact sizes.
        if [ -f "$OUT_DIR/blancco/config.img" ]; then
            echo " Patching config.img for network report storage..."
            CFGTMP=$(mktemp -d)
            cd "$CFGTMP"
            cpio -id < "$OUT_DIR/blancco/config.img" 2>/dev/null
            if [ -f "$CFGTMP/preferences.xml" ]; then
                ORIG_SIZE=$(stat -c%s "$CFGTMP/preferences.xml")
                python3 << 'PYEOF'
import sys

with open("preferences.xml", "rb") as f:
    data = f.read()
orig_size = len(data)

# Set SMB share credentials and path
data = data.replace(
    b'<username encrypted="false"></username>',
    b'<username encrypted="false">blancco</username>'
)
data = data.replace(
    b'<password encrypted="false"></password>',
    b'<password encrypted="false">blancco</password>'
)
data = data.replace(
    b'<hostname></hostname>',
    b'<hostname>10.9.100.1</hostname>'
)
data = data.replace(
    b'<path></path>',
    b'<path>blancco-reports</path>'
)
# Enable auto-backup
data = data.replace(
    b'<auto_backup>false</auto_backup>',
    b'<auto_backup>true</auto_backup>'
)
# Enable bootable report
data = data.replace(
    b'<bootable_report>\n <enabled>false</enabled>\n </bootable_report>',
    b'<bootable_report>\n <enabled>true</enabled>\n </bootable_report>'
)

# Maintain exact file size by trimming trailing padding/whitespace
diff = len(data) - orig_size
if diff > 0:
    # The file has trailing whitespace/padding before the final XML closing tags
    # Trim from the padding area (spaces before closing comment or end of file)
    end_pos = data.rfind(b'<!-- ')
    if end_pos > 0:
        comment_end = data.find(b' -->', end_pos)
        if comment_end > 0:
            data = data[:comment_end - diff] + data[comment_end:]
    if len(data) > orig_size:
        # Fallback: trim trailing whitespace
        data = data.rstrip()
        data = data + b'\n' * (orig_size - len(data))
elif diff < 0:
    # Pad with spaces to maintain size
    data = data[:-1] + b' ' * (-diff) + data[-1:]
if len(data) != orig_size:
    print(f" WARNING: Size mismatch ({len(data)} vs {orig_size}), padding to match")
    if len(data) > orig_size:
        data = data[:orig_size]
    else:
        data = data + b'\x00' * (orig_size - len(data))
with open("preferences.xml", "wb") as f:
    f.write(data)
print(f" preferences.xml: {orig_size} bytes (preserved)")
PYEOF
                # Repack CPIO with exact 512-byte block alignment (194560 bytes)
                ls -1 "$CFGTMP" | (cd "$CFGTMP" && cpio -o -H newc 2>/dev/null) | \
                    dd bs=512 conv=sync 2>/dev/null > "$OUT_DIR/blancco/config.img"
                echo " Reports: SMB blancco@10.9.100.1/blancco-reports, bootable report enabled"
            fi
            cd "$REPO_ROOT"
            rm -rf "$CFGTMP"
        fi
        # --- Patch initramfs to keep network interfaces up after copytoram ---
        # Blancco uses copytoram=y which triggers archiso_pxe_common latehook to
        # flush all network interfaces. Binary-patch the check from "y" to "N" so
        # the condition never matches. IMPORTANT: full extract/repack BREAKS booting.
        echo " Patching initramfs to preserve network after copytoram..."
        python3 << PYEOF
import lzma, sys, os

initramfs = "$OUT_DIR/blancco/initramfs-bde-linux.img"
with open(initramfs, "rb") as f:
    compressed = f.read()

# Decompress XZ stream
try:
    raw = lzma.decompress(compressed)
except lzma.LZMAError:
    print(" WARNING: Could not decompress initramfs (not XZ?), skipping patch")
    sys.exit(0)

# Binary patch: change the copytoram check from "y" to "N"
old = b'"y" ]; then\n for curif in /sys/class/net'
new = b'"N" ]; then\n for curif in /sys/class/net'
if old not in raw:
    # Try alternate pattern with different whitespace
    old = b'"\${copytoram}" = "y"'
    new = b'"\${copytoram}" = "N"'
if old in raw:
    raw = raw.replace(old, new, 1)
    # Recompress with same XZ settings as archiso
    recompressed = lzma.compress(raw, format=lzma.FORMAT_XZ,
                                 check=lzma.CHECK_CRC32,
                                 preset=6)
    with open(initramfs, "wb") as f:
        f.write(recompressed)
    print(" initramfs patched: copytoram network flush disabled")
else:
    print(" WARNING: copytoram pattern not found in initramfs, skipping patch")
PYEOF
        # --- Build GRUB EFI binary for Blancco chainload ---
        # Broadcom iPXE can't pass initrd to Linux kernels in UEFI mode.
        # Solution: iPXE chains to grubx64.efi, which loads kernel+initrd via TFTP.
        GRUB_CFG="$REPO_ROOT/boot-tools/blancco/grub-blancco.cfg"
        if [ -f "$GRUB_CFG" ]; then
            if command -v grub-mkstandalone &>/dev/null; then
                echo " Building grubx64.efi (GRUB chainload for Blancco)..."
                grub-mkstandalone \
                    --format=x86_64-efi \
                    --output="$OUT_DIR/blancco/grubx64.efi" \
                    --modules="linux normal echo net efinet http tftp chain sleep all_video efi_gop" \
                    "boot/grub/grub.cfg=$GRUB_CFG" 2>/dev/null
                echo " Built grubx64.efi ($(du -h "$OUT_DIR/blancco/grubx64.efi" | cut -f1))"
            else
                echo " WARNING: grub-mkstandalone not found. Install grub-efi-amd64-bin:"
                echo " sudo apt install grub-efi-amd64-bin grub-common"
                echo " Then re-run this script to build grubx64.efi"
            fi
        else
            echo " WARNING: grub-blancco.cfg not found at $GRUB_CFG"
        fi
    else
        echo " Could not extract boot files from ISO."
    fi
    rm -rf "$TMPDIR"
else
    echo " No Blancco ISO found. Provide path as argument or place in project directory."
    echo " Usage: $0 /path/to/DriveEraser.iso"
fi
ls -lh "$OUT_DIR/blancco/" 2>/dev/null | grep -v '^total' | sed 's/^/ /'
# --- Memtest86+ ---
echo ""
echo "[3/3] Memtest86+"
MEMTEST_VERSION="7.20"
MEMTEST_URL="https://memtest.org/download/${MEMTEST_VERSION}/mt86plus_${MEMTEST_VERSION}.binaries.zip"
if [ -f "$OUT_DIR/memtest/memtest.efi" ]; then
    echo " Already prepared, skipping."
else
    echo " Downloading Memtest86+ v${MEMTEST_VERSION}..."
    TMPDIR=$(mktemp -d)
    wget -q --show-progress -O "$TMPDIR/memtest.zip" "$MEMTEST_URL" || {
        echo " ERROR: Download failed. Download manually from https://memtest.org"
        TMPDIR=""
    }
    if [ -n "$TMPDIR" ] && [ -f "$TMPDIR/memtest.zip" ]; then
        echo " Extracting EFI binary..."
        unzip -o -j "$TMPDIR/memtest.zip" "memtest64.efi" -d "$OUT_DIR/memtest/" 2>/dev/null || \
            unzip -o -j "$TMPDIR/memtest.zip" "mt86plus_${MEMTEST_VERSION}.x64.efi" -d "$OUT_DIR/memtest/" 2>/dev/null || \
            unzip -o "$TMPDIR/memtest.zip" -d "$TMPDIR/extract/"
        # Find the EFI file regardless of exact name
        EFI_FILE=$(find "$TMPDIR" "$OUT_DIR/memtest" -name '*.efi' -name '*64*' 2>/dev/null | head -1)
        if [ -n "$EFI_FILE" ] && [ ! -f "$OUT_DIR/memtest/memtest.efi" ]; then
            cp "$EFI_FILE" "$OUT_DIR/memtest/memtest.efi"
        fi
        rm -rf "$TMPDIR"
        echo " Done."
    fi
fi
ls -lh "$OUT_DIR/memtest/" 2>/dev/null | grep -v '^total' | sed 's/^/ /'
# --- Summary ---
echo ""
echo "============================================"
echo "Boot tools prepared in: $OUT_DIR/"
echo "============================================"
echo ""
for tool in clonezilla blancco memtest; do
    COUNT=$(find "$OUT_DIR/$tool" -type f 2>/dev/null | wc -l)
    SIZE=$(du -sh "$OUT_DIR/$tool" 2>/dev/null | cut -f1)
    printf " %-15s %s (%d files)\n" "$tool" "$SIZE" "$COUNT"
done
echo ""
echo "These files need to be copied to the PXE server's web root:"
echo " /var/www/html/clonezilla/"
echo " /var/www/html/blancco/"
echo " /var/www/html/memtest/"
echo ""
echo "The build-usb.sh script will include them automatically,"
echo "or copy them manually to the server."
echo ""
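The preferences.xml patch works because the file is space-padded, so individual bytes can change but the length must not. A distilled sketch of that size-preserving replace (the real heredoc above additionally trims before the trailing XML comment and falls back to NUL padding):

```python
def patch_preserving_size(data: bytes, old: bytes, new: bytes) -> bytes:
    """Replace old -> new once, absorbing any growth in the trailing padding
    and refilling any shrinkage with spaces, so len(result) == len(data)."""
    orig = len(data)
    out = data.replace(old, new, 1)
    if len(out) > orig:
        out = out.rstrip()  # consume trailing padding to absorb the growth
        if len(out) > orig:
            raise ValueError("not enough trailing padding to absorb growth")
    return out + b" " * (orig - len(out))
```

The same invariant applies one level up: after editing preferences.xml, the repack pipes the fresh CPIO through `dd bs=512 conv=sync`, which pads the archive back out to whole 512-byte blocks so config.img keeps its exact original size.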

scripts/pull-bios.sh (new executable file)

@@ -0,0 +1,21 @@
#!/bin/bash
# pull-bios.sh - Pull BIOS update binaries from prod PXE server to bios-staging/
# Run this with the USB NIC plugged in, before building the USB.
set -e
REPO_ROOT="$(cd "$(dirname "$0")"/.. && pwd)"
DEST="$REPO_ROOT/bios-staging"
PXE_SERVER="10.9.100.1"
PXE_USER="pxe"
PXE_PASS="pxe"
mkdir -p "$DEST"
echo "Pulling BIOS binaries from $PXE_SERVER..."
sshpass -p "$PXE_PASS" scp -o StrictHostKeyChecking=no -o ConnectTimeout=10 \
    "$PXE_USER@$PXE_SERVER:/srv/samba/enrollment/BIOS/*.exe" "$DEST/"
COUNT=$(find "$DEST" -name '*.exe' | wc -l)
SIZE=$(du -sh "$DEST" | cut -f1)
echo "Done: $COUNT files ($SIZE) in bios-staging/"

@@ -0,0 +1,177 @@
#!/usr/bin/env python3
"""Sync HardwareDriver.json and user_selections.json across all PXE image types.
Reads all HardwareDriver.json files, builds a unified driver catalog,
then updates each image to include all known hardware models.
Run after adding new driver packs to the shared Out-of-box Drivers directory.
"""
import json
from pathlib import Path
from collections import OrderedDict
WINPEAPPS = Path("/srv/samba/winpeapps")
SHARED_DRIVERS = WINPEAPPS / "_shared" / "Out-of-box Drivers"
def normalize_entry(entry):
    """Normalize a HardwareDriver.json entry to a consistent format."""
    norm = {}
    norm["manufacturer"] = entry.get("manufacturer", "Dell")
    norm["product"] = entry.get("product") or entry.get("manufacturerfriendlyname", "Dell")
    norm["family"] = entry.get("family", "")
    norm["modelswminame"] = entry.get("modelswminame") or entry.get("models", "")
    norm["modelsfriendlyname"] = entry.get("modelsfriendlyname", "")
    norm["fileName"] = entry.get("fileName") or entry.get("FileName", "")
    norm["destinationDir"] = entry.get("destinationDir") or entry.get("DestinationDir", "")
    norm["url"] = entry.get("url", "")
    norm["hash"] = entry.get("hash", "")
    norm["size"] = entry.get("size", 0)
    norm["modifiedDate"] = entry.get("modifiedDate", "0001-01-01T00:00:00")
    norm["osId"] = entry.get("osId", "")
    norm["imagedisk"] = entry.get("imagedisk", 0)
    return norm


def merge_os_ids(a, b):
    """Merge two osId strings (e.g., '18' + '20,21' -> '18,20,21')."""
    ids = set()
    for oid in [a, b]:
        for part in str(oid).split(","):
            part = part.strip()
            if part:
                ids.add(part)
    return ",".join(sorted(ids, key=lambda x: int(x) if x.isdigit() else 0))


def check_driver_exists(entry):
    """Check if the driver zip actually exists in the shared directory."""
    dest = entry["destinationDir"]
    dest = dest.replace("*destinationdir*", "")
    dest = dest.lstrip("\\")
    dest = dest.replace("\\", "/")
    # Strip leading path components that are already in SHARED_DRIVERS
    for prefix in ["Deploy/Out-of-box Drivers/", "Out-of-box Drivers/"]:
        if dest.startswith(prefix):
            dest = dest[len(prefix):]
            break
    dest = dest.lstrip("/")
    zip_path = SHARED_DRIVERS / dest / entry["fileName"]
    return zip_path.exists()
def main():
    print("=== PXE Hardware Model Sync ===")
    print()

    # Step 1: Build unified catalog from all images
    print("Reading driver catalogs...")
    catalog = OrderedDict()
    image_dirs = sorted(
        [d for d in WINPEAPPS.iterdir() if d.is_dir() and not d.name.startswith("_")]
    )
    for img_dir in image_dirs:
        hw_file = img_dir / "Deploy" / "Control" / "HardwareDriver.json"
        if not hw_file.exists():
            continue
        with open(hw_file) as f:
            entries = json.load(f)
        print("  Read {} entries from {}".format(len(entries), img_dir.name))
        for entry in entries:
            norm = normalize_entry(entry)
            key = (norm["family"], norm["fileName"])
            if key in catalog:
                catalog[key]["osId"] = merge_os_ids(
                    catalog[key]["osId"], norm["osId"]
                )
                # Prefer longer/more complete model names
                if len(norm["modelswminame"]) > len(catalog[key]["modelswminame"]):
                    catalog[key]["modelswminame"] = norm["modelswminame"]
                if len(norm["modelsfriendlyname"]) > len(
                    catalog[key]["modelsfriendlyname"]
                ):
                    catalog[key]["modelsfriendlyname"] = norm["modelsfriendlyname"]
            else:
                catalog[key] = norm

    unified = list(catalog.values())
    print()
    print("Unified catalog: {} unique driver entries".format(len(unified)))

    # Step 2: Check which drivers actually exist on disk
    missing = []
    found = 0
    for entry in unified:
        if check_driver_exists(entry):
            found += 1
        else:
            missing.append(
                "  {}: {}".format(entry["family"], entry["fileName"])
            )
    print("  {} drivers found on disk".format(found))
    if missing:
        print("  WARNING: {} driver zips NOT found on disk:".format(len(missing)))
        for m in missing[:15]:
            print(m)
        if len(missing) > 15:
            print("  ... and {} more".format(len(missing) - 15))
        print("  (Entries still included - PESetup may download them)")

    # Step 3: Build unified model selection from all driver entries
    models = []
    seen = set()
    for entry in unified:
        friendly_names = [
            n.strip()
            for n in entry["modelsfriendlyname"].split(",")
            if n.strip()
        ]
        family = entry["family"]
        for name in friendly_names:
            key = (name, family)
            if key not in seen:
                seen.add(key)
                models.append({"Model": name, "Id": family})
    models.sort(key=lambda x: x["Model"])
    print()
    print("Unified model selection: {} models".format(len(models)))

    # Step 4: Update each image
    print()
    print("Updating images...")
    for img_dir in image_dirs:
        hw_file = img_dir / "Deploy" / "Control" / "HardwareDriver.json"
        us_file = img_dir / "Tools" / "user_selections.json"
        if not hw_file.exists() or not us_file.exists():
            continue
        # Write unified HardwareDriver.json
        with open(hw_file, "w") as f:
            json.dump(unified, f, indent=2)
            f.write("\n")
        # Update user_selections.json (preserve OperatingSystemSelection etc.)
        with open(us_file) as f:
            user_sel = json.load(f)
        old_count = len(user_sel[0].get("HardwareModelSelection", []))
        user_sel[0]["HardwareModelSelection"] = models
        with open(us_file, "w") as f:
            json.dump(user_sel, f, indent=2)
            f.write("\n")
        print(
            "  {}: {} -> {} models, {} driver entries".format(
                img_dir.name, old_count, len(models), len(unified)
            )
        )

    print()
    print("Done!")


if __name__ == "__main__":
    main()
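`merge_os_ids` does the actual reconciliation when the same `(family, fileName)` pair shows up in several images. Reproduced standalone from the script above, its behaviour on a few inputs:

```python
def merge_os_ids(a, b):
    """Union of two comma-separated osId strings, sorted numerically."""
    ids = set()
    for oid in [a, b]:
        for part in str(oid).split(","):
            part = part.strip()
            if part:
                ids.add(part)
    return ",".join(sorted(ids, key=lambda x: int(x) if x.isdigit() else 0))

print(merge_os_ids("18", "20,21"))   # -> 18,20,21
print(merge_os_ids("21, 20", "20"))  # -> 20,21  (whitespace stripped, dupes dropped)
```

Sorting with an `int` key keeps the merged list stable regardless of the order images are read in, so repeated runs don't churn the JSON files.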