Add buildings and trees export pipelines
@@ -32,6 +32,12 @@
- No automated tests yet; rely on manual validation: run exports on a small tile subset, open outputs in `gdalinfo` or a GIS viewer, and confirm `tile_index.csv` aligns with Unity expectations.
- When changing scaling or resolution, compare before/after stats (min/max) and spot-check terrain in Unity.

## Current Pipelines (v1 WIP)

- **Heightmaps/Orthos**: unchanged; see README.
- **Buildings (new)**: `--export buildings` converts LoD2 CityGML → CityJSON (citygml-tools), triangulates (cjio), rebases to tile-local XY, merges each tile into one GLB (one mesh with roof/wall primitives), decimates to the triangle budget, applies planar-UV roofs with the embedded DOP20 tile texture (unlit), colors walls from the ortho as a fallback, and uses glTF-friendly axes (x=east, y=height, z=-north). Open items: better wall coloring (BDOM/LPO), stronger simplification, footprint-aware ground snap (currently a clamp to the DTM).
- **Trees (new)**: `--export trees` uses a DOM1–DGM1 CHM plus a buffered CityGML building mask to find trees, applies a roughness/confidence heuristic with an optional LPO/BDOM boost, and caps detections by count. Outputs per-tile CSV and chunked GLBs (4×4 by default) built from 16 procedural proxies; an instancing toggle lives in the config, and a shared proxy library is emitted.
- **Tools**: expects `citygml-tools-2.4.0/citygml-tools` and `cjio` on PATH (override via the `CITYGML_TOOLS` and `CJIO` env vars). Orthos must exist for roof/wall coloring.
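The axis convention used by both new exporters (x=east, y=height, z=-north) amounts to a fixed remap of projected (east, north, elevation) coordinates. A minimal sketch of that conversion (the function name here is illustrative; the exporter does the same remap inline):

```python
import numpy as np

def enu_to_gltf(vertices: np.ndarray) -> np.ndarray:
    """Remap rows of (east, north, elevation) to glTF axes:
    x = east, y = height (elevation), z = -north."""
    out = np.empty_like(vertices)
    out[:, 0] = vertices[:, 0]    # x = east
    out[:, 1] = vertices[:, 2]    # y = height
    out[:, 2] = -vertices[:, 1]   # z = -north
    return out

# A point 1 m east, 2 m north, 3 m up maps to (1, 3, -2).
print(enu_to_gltf(np.array([[1.0, 2.0, 3.0]])))
```

Negating north (rather than east) keeps the handedness consistent with glTF's right-handed, Y-up convention.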

## Commit & Pull Request Guidelines

- Commit messages are short, imperative summaries (e.g., "Ignore generated orthophotos"). Group related changes per commit; commit `uv.lock` when dependency versions change.
- Before opening a PR: describe the change, list the commands run, note data locations (not committed), and include screenshots from Unity/GIS if visuals changed.
README.md
@@ -32,6 +32,8 @@ This repository converts DGM1 elevation tiles into Unity-ready 16-bit PNG height
uv run python geodata_to_unity.py --export all
# heightmaps only: uv run python geodata_to_unity.py --export heightmap
# textures only: uv run python geodata_to_unity.py --export textures
# buildings: uv run python geodata_to_unity.py --export buildings
# trees (CSV+GLB): uv run python geodata_to_unity.py --export trees
```
4. Import the PNGs into Unity Terrains using `tile_index.csv` for placement and consistent height scaling (0–65535).
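To reason about the 0–65535 scaling mentioned in step 4, it helps to write the decode direction down explicitly. This is a sketch under an assumption: that each tile is normalized linearly over the min/max recorded for it (confirm the actual convention against the heightmap exporter and `tile_index.csv`); the function name is hypothetical:

```python
def png_to_height_m(px: int, h_min: float, h_max: float) -> float:
    """Map a 16-bit PNG sample (0..65535) back to meters, assuming the
    tile was normalized linearly over [h_min, h_max]."""
    return h_min + (px / 65535.0) * (h_max - h_min)

# With a tile spanning 100..300 m, the extreme samples decode to the extremes.
print(png_to_height_m(0, 100.0, 300.0), png_to_height_m(65535, 100.0, 300.0))
```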
@@ -68,72 +70,16 @@ This repository converts DGM1 elevation tiles into Unity-ready 16-bit PNG height
- The download script relies on a Linux/OpenSSL toolchain with the system CA bundle at `/etc/ssl/certs/ca-certificates.crt`; it builds a trust chain by fetching the geobasis intermediate. macOS/Windows users should either provide a combined CA via `CURL_CA_BUNDLE` or download with a browser/wget and place files manually.
- Place companion `.j2w` and `.xml` files under `raw/dop20/j2w/` and `raw/dop20/meta/` if available; they are not required for the VRT but help with provenance.

### Buildings

LoD2 CityGML tiles can be converted to GLB per tile while preserving roof/wall semantics:

1. CityGML → CityJSON:
```bash
for f in raw/citygml/lod2/*.gml; do
  base="$(basename "$f" .gml)"
  tools/citygml-tools-2.4.0/citygml-tools to-cityjson "$f" \
    -o "work/cityjson/${base}.city.json"
done
```
If citygml-tools outputs directories named `<tile>.city.json/`, they contain `<tile>.json` inside.
If cjio later reports out-of-range vertex indices during triangulation, clean first:
```bash
mkdir -p work/cityjson_clean
uv run python scripts/clean_cityjson_vertices.py --input-dir work/cityjson --output-dir work/cityjson_clean
```
2. Clean + triangulate:
```bash
mkdir -p work/cityjson_tri
for f in work/cityjson_clean/*.city.json; do
  base="$(basename "$f" .city.json)"
  # citygml-tools may write a directory named <tile>.city.json with the real JSON inside
  input_json="$f"
  [ -d "$f" ] && input_json="$f/$base.json"
  uv run cjio --ignore_duplicate_keys "$input_json" upgrade triangulate vertices_clean \
    save "work/cityjson_tri/${base}.tri.city.json"
done
```
If cjio still complains about duplicate object IDs, inspect the source GML; `--ignore_duplicate_keys` tells cjio to keep the last occurrence.
3. Shift coordinates to tile-local XY using the heightmap manifest (Z stays absolute):
```bash
uv run python scripts/rebase_cityjson_tiles.py \
  --input-dir work/cityjson_tri \
  --output-dir work/cityjson_tri_local \
  --tile-index export_unity/tile_index.csv
```
If your `tile_index.csv` lives elsewhere, override `--tile-index`. The script uses the numeric tile suffix (e.g., `32_328_5511`) to match LoD2 tiles to the DGM1 manifest.
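The suffix matching mentioned above is simple string surgery: keep the last three underscore-separated fields (zone, easting, northing). This mirrors the `_tile_suffix` helper that both new pipeline modules define:

```python
def tile_suffix(tile_id: str) -> str:
    """Keep the numeric suffix (zone_easting_northing) used to match
    LoD2 tiles against the DGM1 manifest, e.g. '32_328_5511'."""
    parts = tile_id.split("_")
    return "_".join(parts[-3:]) if len(parts) >= 3 else tile_id

print(tile_suffix("dgm1_32_328_5511"))  # 32_328_5511
```

IDs with fewer than three fields pass through unchanged, so oddly named tiles fail the match loudly (as a missing file) rather than silently pairing with the wrong tile.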
4. Split roof/wall semantics without reindexing vertices on the tile-local set:
```bash
uv run python scripts/split_semantics.py --input-dir work/cityjson_tri_local --output-dir work/cityjson_split_local
```
5. Export GLBs (tile-local coords; place with `export_unity/tile_index.csv`):
```bash
for f in work/cityjson_tri_local/*.tri.city.json; do
  base="$(basename "$f" .tri.city.json)"
  uv run cjio "$f" export glb "export_unity/buildings_glb/${base}.glb"
done
for f in work/cityjson_split_local/*.roof.city.json; do
  base="$(basename "$f" .roof.city.json)"
  uv run cjio "$f" export glb "export_unity/buildings_glb_split/${base}_roof.glb"
done
for f in work/cityjson_split_local/*.wall.city.json; do
  base="$(basename "$f" .wall.city.json)"
  uv run cjio "$f" export glb "export_unity/buildings_glb_split/${base}_wall.glb"
done
```
6. Confirm intermediates/GLBs exist (override dirs if using `_local` outputs):
```bash
uv run python scripts/verify_pipeline.py --mode both \
  --tri-dir work/cityjson_tri_local \
  --split-dir work/cityjson_split_local
```

One-shot test run (assumes `tile_index.csv` is already present from the heightmap export):
```bash
bash scripts/run_citygml_to_glb.sh
```
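A cheap extra check after either the manual or one-shot run: every GLB starts with a fixed 12-byte header (`glTF` magic, container version 2, total length), the same layout `_compose_glb` in `geodata_pipeline/buildings.py` writes. A hedged sketch of a header sanity check (the function name is ours, not part of the pipeline):

```python
import struct

def glb_header_ok(data: bytes) -> bool:
    """Check the 12-byte GLB header: magic 'glTF', version 2, and the
    declared total length matching the actual byte count."""
    if len(data) < 12:
        return False
    magic, version, length = struct.unpack("<4sII", data[:12])
    return magic == b"glTF" and version == 2 and length == len(data)

# A bare, chunkless 12-byte header passes; garbage does not.
print(glb_header_ok(struct.pack("<4sII", b"glTF", 2, 12)))  # True
```

Run it over `export_unity/buildings_glb*/*.glb` to catch truncated writes before importing into Unity.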
### Buildings (automated exporter)

- Run: `uv run python geodata_to_unity.py --export buildings`
- What it does per tile:
  - Converts LoD2 CityGML → CityJSON (citygml-tools), triangulates with cjio, rebases to tile-local XY using `tile_index.csv`.
  - Merges all buildings into one GLB (one mesh with roof/wall primitives), decimates to the configured triangle budget.
  - Roofs: planar UVs from tile-local XY, embedded per-tile DOP20 orthophoto as base color (unlit by default).
  - Walls: vertex colors sampled from the ortho as a fallback (neutral otherwise).
  - Coordinates: glTF-friendly (x=east, y=height, z=-north) so glTFast instantiates one GameObject with two submeshes.
- Requirements: LoD2 GMLs under `raw/citygml/lod2/`, per-tile orthos in `export_unity/ortho_jpg/`, tools on PATH (`tools/citygml-tools-2.4.0/citygml-tools`, `cjio`; override via the `CJIO` env var).
- Open items: richer wall coloring from BDOM/LPO facades, better simplification, footprint-aware ground snapping beyond the current clamp-to-ground.
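The planar roof UVs described above map tile-local XY straight into texture space, with V flipped because image rows run north-to-south. A standalone sketch matching the exporter's mapping (the function name is illustrative):

```python
import numpy as np

def planar_roof_uv(xy_local: np.ndarray, tile_w: float, tile_h: float) -> np.ndarray:
    """Planar UVs from tile-local XY: U = x / width, and V flipped so the
    orthophoto's row 0 lands on the tile's north edge."""
    uv = np.empty((len(xy_local), 2), dtype=np.float32)
    uv[:, 0] = xy_local[:, 0] / tile_w
    uv[:, 1] = 1.0 - xy_local[:, 1] / tile_h
    return uv

# SW corner -> (0, 1); tile center in X at the north edge -> (0.5, 0).
print(planar_roof_uv(np.array([[0.0, 0.0], [500.0, 1000.0]]), 1000.0, 1000.0))
```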

### Troubleshooting

- Empty raw directories cause VRT creation to fail fast (`No sources available to build VRT`); populate the inputs or adjust the `--raw-*` overrides.
@@ -30,5 +30,26 @@
  "ortho": {
    "out_res": 2048,
    "jpeg_quality": 90
  },
  "buildings": {
    "out_dir": "export_unity/buildings_tiles",
    "work_cityjson_dir": "work/cityjson_lod2",
    "work_rebased_dir": "work/cityjson_lod2_local",
    "work_glb_dir": "work/buildings_glb_tmp",
    "triangle_budget_min": 200000,
    "triangle_budget_max": 350000,
    "roof_unlit": true
  },
  "trees": {
    "csv_dir": "export_unity/trees",
    "glb_dir": "export_unity/trees_tiles",
    "proxy_library": "export_unity/tree_proxies.glb",
    "grid_res_m": 2.0,
    "min_height_m": 2.0,
    "max_trees": 5000,
    "chunk_grid": 4,
    "proxy_variants": 16,
    "proxy_min_tris": 120,
    "proxy_max_tris": 180
  }
}
geodata_pipeline/buildings.py (new file)
@@ -0,0 +1,487 @@
from __future__ import annotations

import json
import math
import os
import shlex
import struct
import subprocess
from typing import Dict, List, Tuple

import numpy as np
from osgeo import gdal

from .config import Config
from .gdal_utils import ensure_dir

# Tooling defaults; can be overridden via env vars if needed.
CITYGML_TOOLS = os.environ.get("CITYGML_TOOLS", "tools/citygml-tools-2.4.0/citygml-tools")
# Default cjio command assumes we are already running inside the project venv.
# If you invoke the exporter from outside the venv, set CJIO="uv run cjio".
CJIO_CMD = os.environ.get("CJIO", "cjio")


def _pad4(data: bytes, pad_byte: bytes = b"\x20") -> bytes:
    pad_len = (4 - (len(data) % 4)) % 4
    return data + pad_byte * pad_len


def _tile_suffix(tile_id: str) -> str:
    parts = tile_id.split("_")
    return "_".join(parts[-3:]) if len(parts) >= 3 else tile_id


def _load_cityjson(path: str) -> dict | None:
    if not os.path.exists(path):
        return None
    with open(path, "r", encoding="utf-8") as handle:
        return json.load(handle)
def _flatten_faces(boundaries, values, surfaces, current_type: str | None, faces: List[Tuple[List[int], str]]) -> None:
    """Recursively flatten boundaries with semantics values into faces."""
    if isinstance(values, list) and isinstance(boundaries, list):
        for b, v in zip(boundaries, values):
            _flatten_faces(b, v, surfaces, current_type, faces)
        return

    sem_type = current_type
    if isinstance(values, int):
        if surfaces and 0 <= values < len(surfaces):
            sem_type = surfaces[values].get("type") or current_type
    # boundaries at this point should be a list of indices or a list of rings
    if not isinstance(boundaries, list):
        return
    # handle polygon ring lists; take the outer ring only
    if boundaries and isinstance(boundaries[0], list):
        ring = boundaries[0]
    else:
        ring = boundaries
    if len(ring) < 3:
        return
    # simple fan triangulation
    for i in range(1, len(ring) - 1):
        faces.append(([ring[0], ring[i], ring[i + 1]], sem_type or "WallSurface"))


def _collect_faces(cityjson: dict) -> Tuple[List[List[float]], List[Tuple[List[int], str]]]:
    vertices = cityjson.get("vertices") or []
    faces: list[Tuple[List[int], str]] = []
    cityobjects = cityjson.get("CityObjects") or {}
    for obj in cityobjects.values():
        geometries = obj.get("geometry") or []
        for geom in geometries:
            boundaries = geom.get("boundaries")
            semantics = geom.get("semantics") or {}
            surfaces = semantics.get("surfaces") or []
            values = semantics.get("values")
            if boundaries is None:
                continue
            _flatten_faces(boundaries, values, surfaces, None, faces)
    return vertices, faces


def _split_roof_wall(faces: List[Tuple[List[int], str]]) -> Tuple[List[List[int]], List[List[int]]]:
    roofs: list[List[int]] = []
    walls: list[List[int]] = []
    for face, sem_type in faces:
        target = roofs if (sem_type and "roof" in sem_type.lower()) else walls
        target.append(face)
    return roofs, walls
def _compute_normals(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    normals = np.zeros_like(vertices)
    if faces.size == 0:
        return normals
    tri = vertices[faces]
    v1 = tri[:, 1] - tri[:, 0]
    v2 = tri[:, 2] - tri[:, 0]
    face_normals = np.cross(v1, v2)
    lens = np.linalg.norm(face_normals, axis=1)
    lens[lens == 0] = 1.0
    face_normals = face_normals / lens[:, None]
    for i, fn in enumerate(face_normals):
        for vi in faces[i]:
            normals[vi] += fn
    lens_v = np.linalg.norm(normals, axis=1)
    lens_v[lens_v == 0] = 1.0
    normals /= lens_v[:, None]
    return normals


def _decimate(faces: np.ndarray, budget: int, seed: int = 42) -> np.ndarray:
    if len(faces) <= budget:
        return faces
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(faces), size=budget, replace=False)
    return faces[idx]


def _build_materials(embed_roof: bytes | None, unlit_roof: bool) -> Tuple[List[dict], List[str]]:
    materials = []
    extensions_used: list[str] = []
    roof_mat = {
        "name": "Roof",
        "pbrMetallicRoughness": {"metallicFactor": 0.0, "roughnessFactor": 1.0},
    }
    if embed_roof is not None:
        roof_mat["pbrMetallicRoughness"]["baseColorTexture"] = {"index": 0}
    if unlit_roof:
        roof_mat.setdefault("extensions", {})["KHR_materials_unlit"] = {}
        extensions_used.append("KHR_materials_unlit")
    materials.append(roof_mat)
    wall_mat = {
        "name": "Wall",
        "pbrMetallicRoughness": {
            "baseColorFactor": [0.75, 0.75, 0.75, 1.0],
            "metallicFactor": 0.0,
            "roughnessFactor": 1.0,
        },
    }
    materials.append(wall_mat)
    return materials, extensions_used
def _compose_glb(
    roof_vertices: np.ndarray,
    roof_normals: np.ndarray,
    roof_uv: np.ndarray,
    roof_faces: np.ndarray,
    wall_vertices: np.ndarray,
    wall_normals: np.ndarray,
    wall_colors: np.ndarray,
    wall_faces: np.ndarray,
    embed_roof_tex: bytes | None,
    unlit_roof: bool,
) -> bytes:
    buffer_views = []
    accessors = []
    meshes = []
    bin_data = bytearray()

    def add_view(data: bytes) -> int:
        offset = int(math.ceil(len(bin_data) / 4.0) * 4)
        if offset > len(bin_data):
            bin_data.extend(b"\x00" * (offset - len(bin_data)))
        bin_data.extend(data)
        buffer_views.append({"buffer": 0, "byteOffset": offset, "byteLength": len(data)})
        return len(buffer_views) - 1

    def add_accessor(view_idx: int, count: int, comp: int, type_str: str, min_val=None, max_val=None) -> int:
        acc = {"bufferView": view_idx, "componentType": comp, "count": count, "type": type_str}
        if min_val is not None:
            acc["min"] = min_val
        if max_val is not None:
            acc["max"] = max_val
        accessors.append(acc)
        return len(accessors) - 1

    materials, extensions_used = _build_materials(embed_roof_tex, unlit_roof)

    def add_primitive(
        positions: np.ndarray,
        normals: np.ndarray,
        faces: np.ndarray,
        material_idx: int,
        texcoord: np.ndarray | None = None,
        colors: np.ndarray | None = None,
    ) -> dict:
        prim: dict = {}
        pos_view = add_view(positions.astype(np.float32).tobytes())
        pos_acc = add_accessor(pos_view, len(positions), 5126, "VEC3", positions.min(axis=0).tolist(), positions.max(axis=0).tolist())
        nor_view = add_view(normals.astype(np.float32).tobytes())
        nor_acc = add_accessor(nor_view, len(normals), 5126, "VEC3")
        idx_view = add_view(faces.astype(np.uint32).reshape(-1).tobytes())
        idx_acc = add_accessor(idx_view, faces.size, 5125, "SCALAR")
        attrs = {"POSITION": pos_acc, "NORMAL": nor_acc}
        if texcoord is not None:
            uv_view = add_view(texcoord.astype(np.float32).tobytes())
            uv_acc = add_accessor(uv_view, len(texcoord), 5126, "VEC2")
            attrs["TEXCOORD_0"] = uv_acc
        if colors is not None:
            col_view = add_view(colors.astype(np.float32).tobytes())
            col_acc = add_accessor(col_view, len(colors), 5126, "VEC3")
            attrs["COLOR_0"] = col_acc
        prim["attributes"] = attrs
        prim["indices"] = idx_acc
        prim["material"] = material_idx
        return prim

    if roof_faces.size:
        meshes.append({"primitives": [add_primitive(roof_vertices, roof_normals, roof_faces, 0, texcoord=roof_uv)]})
    if wall_faces.size:
        meshes.append({"primitives": [add_primitive(wall_vertices, wall_normals, wall_faces, 1, colors=wall_colors)]})

    nodes = [{"mesh": i} for i in range(len(meshes))]
    gltf = {
        "asset": {"version": "2.0"},
        "scene": 0,
        "scenes": [{"nodes": list(range(len(nodes)))}],
        "nodes": nodes,
        "meshes": meshes,
        "materials": materials,
    }

    # Embedded roof texture (JPEG bytes appended to the binary chunk)
    if embed_roof_tex is not None:
        img_view_idx = add_view(embed_roof_tex)
        gltf["textures"] = [{"source": 0}]
        gltf["images"] = [{"bufferView": img_view_idx, "mimeType": "image/jpeg"}]
        gltf["samplers"] = [{"magFilter": 9729, "minFilter": 9729, "wrapS": 10497, "wrapT": 10497}]

    gltf["buffers"] = [{"byteLength": len(bin_data)}]
    gltf["bufferViews"] = buffer_views
    gltf["accessors"] = accessors
    if extensions_used:
        gltf["extensionsUsed"] = extensions_used

    json_bytes = json.dumps(gltf, separators=(",", ":")).encode("utf-8")
    json_padded = _pad4(json_bytes)
    bin_padded = _pad4(bytes(bin_data))
    total_len = 12 + 8 + len(json_padded) + 8 + len(bin_padded)
    header = struct.pack("<4sII", b"glTF", 2, total_len)
    json_header = struct.pack("<I4s", len(json_padded), b"JSON")
    bin_header = struct.pack("<I4s", len(bin_padded), b"BIN\x00")
    return b"".join([header, json_header, json_padded, bin_header, bin_padded])
def _load_ortho(tile_id: str, path: str) -> bytes | None:
    if not os.path.exists(path):
        print(f"[buildings] missing ortho for {tile_id}: {path}")
        return None
    with open(path, "rb") as handle:
        return handle.read()


def _bounds_from_row(row: Dict[str, str]) -> Tuple[float, float, float, float]:
    return float(row["xmin"]), float(row["ymin"]), float(row["xmax"]), float(row["ymax"])


def _run(cmd: list[str], desc: str) -> bool:
    argv = []
    for part in cmd:
        # Allow space-separated env overrides (e.g., CJIO="uv run cjio")
        if " " in part:
            argv.extend(shlex.split(part))
        else:
            argv.append(part)
    try:
        res = subprocess.run(argv, check=False)
        if res.returncode != 0:
            print(f"[buildings] {desc} failed with code {res.returncode}: {' '.join(argv)}")
            return False
        return True
    except FileNotFoundError:
        print(f"[buildings] missing tool for {desc}: {' '.join(argv)}")
        return False


def _rebase_cityjson(path: str, bounds: Tuple[float, float, float, float], out_path: str) -> bool:
    data = _load_cityjson(path)
    if not data:
        return False
    xmin, ymin, _, _ = bounds
    verts = data.get("vertices") or []
    rebased: list[list[float]] = []
    for v in verts:
        if len(v) >= 3:
            rebased.append([v[0] - xmin, v[1] - ymin, v[2]])
    data["vertices"] = rebased
    # Update geographicalExtent if present
    xs = [v[0] for v in rebased]
    ys = [v[1] for v in rebased]
    zs = [v[2] for v in rebased]
    if xs and ys and zs:
        extent = [min(xs), min(ys), min(zs), max(xs), max(ys), max(zs)]
        data.setdefault("metadata", {})["geographicalExtent"] = extent
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    with open(out_path, "w", encoding="utf-8") as handle:
        json.dump(data, handle, ensure_ascii=True, indent=2)
    return True
def _ensure_cityjson_for_tile(tile_id: str, bounds: Tuple[float, float, float, float], cfg: Config) -> str | None:
    """Create CityJSON -> triangulated -> rebased file if missing. Returns the rebased path."""
    suffix = _tile_suffix(tile_id)
    gml_path = os.path.join(cfg.raw.citygml_lod2_dir, f"LoD2_{suffix}.gml")
    if not os.path.exists(gml_path):
        print(f"[buildings] missing GML for {tile_id}: {gml_path}")
        return None

    def resolve_cityjson(path: str) -> str | None:
        if os.path.isfile(path):
            return path
        if os.path.isdir(path):
            candidate = os.path.join(path, f"{os.path.basename(path)}.json")
            if os.path.isfile(candidate):
                return candidate
            matches = [p for p in os.listdir(path) if p.endswith(".json")]
            if matches:
                return os.path.join(path, matches[0])
        return None

    base = os.path.splitext(os.path.basename(gml_path))[0]
    cityjson_dir = cfg.buildings.work_cityjson_dir
    tri_dir = os.path.join(cityjson_dir, "tri")
    rebased_dir = cfg.buildings.work_rebased_dir
    os.makedirs(cityjson_dir, exist_ok=True)
    os.makedirs(tri_dir, exist_ok=True)
    os.makedirs(rebased_dir, exist_ok=True)

    cityjson_path = os.path.join(cityjson_dir, f"{base}.city.json")
    tri_path = os.path.join(tri_dir, f"{base}.tri.city.json")
    rebased_path = os.path.join(rebased_dir, f"{base}.tri.city.json")

    if not os.path.exists(cityjson_path):
        ok = _run([CITYGML_TOOLS, "to-cityjson", gml_path, "-o", cityjson_path], f"CityGML->CityJSON ({tile_id})")
        if not ok:
            return None
    if not os.path.isfile(cityjson_path):
        resolved = resolve_cityjson(cityjson_path)
        if resolved:
            cityjson_path = resolved
    if not os.path.exists(tri_path):
        ok = _run([CJIO_CMD, cityjson_path, "upgrade", "triangulate", "vertices_clean", "save", tri_path], f"triangulate ({tile_id})")
        if not ok:
            return None
    if not os.path.exists(rebased_path):
        ok = _rebase_cityjson(tri_path, bounds, rebased_path)
        if not ok:
            return None
    return rebased_path
def export_buildings(cfg: Config) -> int:
    ensure_dir(cfg.buildings.out_dir)
    if not os.path.exists(cfg.export.manifest_path):
        raise SystemExit(f"Tile index missing: {cfg.export.manifest_path}. Run heightmap export first.")

    import csv

    written = 0
    with open(cfg.export.manifest_path, newline="", encoding="utf-8") as handle:
        reader = csv.DictReader(handle)
        for row in reader:
            tile_id = row.get("tile_id")
            if not tile_id:
                continue
            bounds = _bounds_from_row(row)
            cityjson_path = _ensure_cityjson_for_tile(tile_id, bounds, cfg)
            if not cityjson_path:
                continue
            data = _load_cityjson(cityjson_path)
            if not data:
                print(f"[buildings] could not read {cityjson_path}")
                continue
            verts, faces_all = _collect_faces(data)
            if not verts or not faces_all:
                print(f"[buildings] no geometry for {tile_id}")
                continue
            roof_faces_raw, wall_faces_raw = _split_roof_wall(faces_all)
            vertices = np.array(verts, dtype=np.float32)
            roof_faces = np.array(roof_faces_raw, dtype=np.uint32) if roof_faces_raw else np.zeros((0, 3), dtype=np.uint32)
            wall_faces = np.array(wall_faces_raw, dtype=np.uint32) if wall_faces_raw else np.zeros((0, 3), dtype=np.uint32)
            # Decimate if over budget
            total = len(roof_faces) + len(wall_faces)
            if total > 0:
                max_budget = cfg.buildings.triangle_budget_max
                min_budget = cfg.buildings.triangle_budget_min
                target = min(total, max_budget)
                if target < min_budget and total > min_budget:
                    target = min_budget
                if total > target:
                    roof_share = len(roof_faces) / total if total else 0.0
                    roof_budget = max(0, int(target * roof_share))
                    wall_budget = max(0, target - roof_budget)
                    if roof_faces.size:
                        roof_faces = _decimate(roof_faces, roof_budget or len(roof_faces))
                    if wall_faces.size:
                        wall_faces = _decimate(wall_faces, wall_budget or len(wall_faces))

            # Ground snap (simple: clamp below-ground vertices up to the DTM)
            try:
                dgm_ds = gdal.Open(cfg.work.heightmap_vrt)
                if dgm_ds:
                    gt = dgm_ds.GetGeoTransform()
                    band = dgm_ds.GetRasterBand(1)
                    arr = band.ReadAsArray()
                    nodata = band.GetNoDataValue()
                    for idx, (vx, vy, vz) in enumerate(vertices):
                        wx = vx + bounds[0]
                        wy = vy + bounds[1]
                        col = int((wx - gt[0]) / gt[1])
                        row_px = int((wy - gt[3]) / gt[5])  # renamed to avoid shadowing the CSV row
                        if 0 <= row_px < arr.shape[0] and 0 <= col < arr.shape[1]:
                            g = float(arr[row_px, col])
                            if nodata is not None and g == nodata:
                                continue
                            if vz < g:
                                vertices[idx, 2] = g
            except Exception:
                pass

            xmin, ymin, xmax, ymax = bounds
            w = xmax - xmin
            h = ymax - ymin
            if w == 0 or h == 0:
                print(f"[buildings] invalid bounds for {tile_id}")
                continue
            # Convert to glTF-friendly axes: x=east, y=height (z), z=-north
            source_xy = vertices[:, :2].copy()
            gltf_vertices = np.zeros_like(vertices)
            gltf_vertices[:, 0] = vertices[:, 0]
            gltf_vertices[:, 1] = vertices[:, 2]
            gltf_vertices[:, 2] = -vertices[:, 1]

            roof_normals = _compute_normals(gltf_vertices, roof_faces) if roof_faces.size else np.zeros_like(gltf_vertices)
            wall_normals = _compute_normals(gltf_vertices, wall_faces) if wall_faces.size else np.zeros_like(gltf_vertices)
            uv = np.zeros((len(vertices), 2), dtype=np.float32)
            uv[:, 0] = source_xy[:, 0] / w
            uv[:, 1] = 1.0 - (source_xy[:, 1] / h)
            # Wall colors sampled from the ortho if available (fallback constant)
            wall_color = np.zeros((len(vertices), 3), dtype=np.float32) + 0.75
            ortho_path = os.path.join(cfg.export.ortho_dir, f"{tile_id}.jpg")
            ortho_bytes = _load_ortho(tile_id, ortho_path)
            if ortho_bytes:
                try:
                    ortho_ds = gdal.Open(ortho_path)
                    if ortho_ds:
                        gt_o = ortho_ds.GetGeoTransform()
                        bands = [ortho_ds.GetRasterBand(i + 1).ReadAsArray() for i in range(min(3, ortho_ds.RasterCount))]
                        for idx, (vx, vy, _) in enumerate(vertices):
                            wx = vx + xmin
                            wy = vy + ymin
                            col = int((wx - gt_o[0]) / gt_o[1])
                            row_px = int((wy - gt_o[3]) / gt_o[5])
                            if 0 <= row_px < ortho_ds.RasterYSize and 0 <= col < ortho_ds.RasterXSize:
                                rgb = [bands[i][row_px, col] for i in range(len(bands))]
                                if rgb:
                                    wall_color[idx] = np.array(rgb[:3], dtype=np.float32) / 255.0
                except Exception:
                    pass

            glb_bytes = _compose_glb(
                gltf_vertices,
                roof_normals,
                uv,
                roof_faces,
                gltf_vertices,
                wall_normals,
                wall_color,
                wall_faces,
                ortho_bytes,
                cfg.buildings.roof_unlit,
            )
            out_path = os.path.join(cfg.buildings.out_dir, f"{tile_id}.glb")
            ensure_dir(cfg.buildings.out_dir)
            with open(out_path, "wb") as handle_out:
                handle_out.write(glb_bytes)
            written += 1
            print(f"[buildings] wrote {out_path}")

    print(f"[buildings] Summary: wrote {written} tile GLB(s).")
    return 0 if written else 1
geodata_pipeline/config.py
@@ -52,6 +52,33 @@ class OrthoConfig:
    jpeg_quality: int = 90


@dataclass
class BuildingConfig:
    out_dir: str = "export_unity/buildings_tiles"
    work_cityjson_dir: str = "work/cityjson_lod2"
    work_rebased_dir: str = "work/cityjson_lod2_local"
    work_glb_dir: str = "work/buildings_glb_tmp"
    triangle_budget_min: int = 200000
    triangle_budget_max: int = 350000
    roof_unlit: bool = True


@dataclass
class TreeConfig:
    csv_dir: str = "export_unity/trees"
    glb_dir: str = "export_unity/trees_tiles"
    proxy_library: str = "export_unity/tree_proxies.glb"
    grid_res_m: float = 2.0
    min_height_m: float = 2.0
    max_trees: int = 5000
    chunk_grid: int = 4
    proxy_variants: int = 16
    proxy_min_tris: int = 120
    proxy_max_tris: int = 180
    building_buffer_m: float = 1.5
    instancing: bool = False


@dataclass
class Config:
    raw: RawConfig = field(default_factory=RawConfig)
@@ -60,6 +87,8 @@ class Config:
    export: ExportConfig = field(default_factory=ExportConfig)
    heightmap: HeightmapConfig = field(default_factory=HeightmapConfig)
    ortho: OrthoConfig = field(default_factory=OrthoConfig)
    buildings: BuildingConfig = field(default_factory=BuildingConfig)
    trees: TreeConfig = field(default_factory=TreeConfig)

    @classmethod
    def default(cls) -> "Config":
@@ -80,6 +109,8 @@ class Config:
            export=ExportConfig(**data["export"]),
            heightmap=HeightmapConfig(**data["heightmap"]),
            ortho=OrthoConfig(**data["ortho"]),
            buildings=BuildingConfig(**data.get("buildings", {})),
            trees=TreeConfig(**data.get("trees", {})),
        )

    def to_dict(self) -> Dict[str, Any]:
geodata_pipeline/trees.py (new file)
@@ -0,0 +1,723 @@
from __future__ import annotations

import csv
import glob
import json
import math
import os
import struct
from hashlib import blake2b
from typing import Iterable, List, Tuple

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from osgeo import gdal, ogr

from .config import Config
from .gdal_utils import build_vrt, ensure_dir, ensure_parent, open_dataset

gdal.UseExceptions()


def _hash_int(key: str, mod: int) -> int:
    digest = blake2b(key.encode("utf-8"), digest_size=8).digest()
    return int.from_bytes(digest, "little") % mod


def _tile_suffix(tile_id: str) -> str:
    parts = tile_id.split("_")
    return "_".join(parts[-3:]) if len(parts) >= 3 else tile_id


def _ensure_dom_vrt(dom_dir: str, vrt_path: str) -> str:
    tif_paths = sorted(glob.glob(os.path.join(dom_dir, "*.tif")))
    build_vrt(vrt_path, tif_paths, force=False)
    return vrt_path


def _ensure_dgm_vrt(cfg: Config) -> str:
    tif_paths = sorted(glob.glob(os.path.join(cfg.raw.dgm1_dir, "*.tif")))
    build_vrt(cfg.work.heightmap_vrt, tif_paths, force=False)
    return cfg.work.heightmap_vrt


def _warp_to_tile(src_path: str, bounds: Tuple[float, float, float, float], res: float) -> gdal.Dataset:
    xmin, ymin, xmax, ymax = bounds
    width = int(math.ceil((xmax - xmin) / res))
    height = int(math.ceil((ymax - ymin) / res))
    opts = gdal.WarpOptions(
        outputBounds=bounds,
        xRes=res,
        yRes=res,
        width=width,
        height=height,
        resampleAlg="bilinear",
        dstNodata=np.nan,
        format="MEM",
    )
    return gdal.Warp("", src_path, options=opts)
|
||||
|
||||
|
||||
def _building_mask(tile_id: str, bounds: Tuple[float, float, float, float], like_ds: gdal.Dataset) -> np.ndarray | None:
    suffix = _tile_suffix(tile_id)
    gml_path = os.path.join("raw", "citygml", "lod2", f"LoD2_{suffix}.gml")
    if not os.path.exists(gml_path):
        return None
    ds = ogr.Open(gml_path)
    if ds is None or ds.GetLayerCount() == 0:
        return None
    driver = gdal.GetDriverByName("MEM")
    mask_ds = driver.Create("", like_ds.RasterXSize, like_ds.RasterYSize, 1, gdal.GDT_Byte)
    mask_ds.SetGeoTransform(like_ds.GetGeoTransform())
    mask_ds.SetProjection(like_ds.GetProjection())
    band = mask_ds.GetRasterBand(1)
    band.Fill(0)
    for idx in range(ds.GetLayerCount()):
        layer = ds.GetLayer(idx)
        try:
            gdal.RasterizeLayer(mask_ds, [1], layer, burn_values=[1])
        except RuntimeError:
            continue
    return band.ReadAsArray()

def _dilate_mask(mask: np.ndarray, radius_px: int) -> np.ndarray:
    if radius_px <= 0:
        return mask
    pad = radius_px
    padded = np.pad(mask, pad_width=pad, mode="constant", constant_values=0)
    win = 2 * radius_px + 1
    windows = sliding_window_view(padded, (win, win))
    dilated = (windows.max(axis=(2, 3)) > 0).astype(mask.dtype)
    return dilated

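The dilation above can be checked in isolation; a standalone sketch with a single burned pixel and `radius_px = 1`:

```python
# Binary dilation via sliding windows: a pixel is set if any pixel within the
# (2r+1)x(2r+1) window around it is set, so one pixel grows to a 3x3 block.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 2] = 1
pad = 1
padded = np.pad(mask, pad_width=pad, mode="constant", constant_values=0)
windows = sliding_window_view(padded, (3, 3))   # shape (5, 5, 3, 3)
dilated = (windows.max(axis=(2, 3)) > 0).astype(np.uint8)
assert dilated.sum() == 9   # single pixel grew to a 3x3 block
```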
def _local_maxima(chm: np.ndarray, min_height: float, spacing_px: int) -> List[Tuple[int, int, float]]:
    """Return list of (row, col, height) for local maxima above min_height with greedy spacing."""
    if chm.size == 0:
        return []
    win = max(3, spacing_px | 1)  # odd window
    pad = win // 2
    padded = np.pad(chm, pad_width=pad, mode="constant", constant_values=0.0)
    windows = sliding_window_view(padded, (win, win))
    local_max = windows.max(axis=(2, 3))
    mask = (chm >= local_max) & (chm >= min_height) & np.isfinite(chm)
    candidates = np.argwhere(mask)
    # Sort by height descending
    values = chm[mask]
    order = np.argsort(values)[::-1]
    selected: list[Tuple[int, int, float]] = []
    spacing2 = spacing_px * spacing_px
    coords = candidates[order]
    heights = values[order]
    for (r, c), h in zip(coords, heights):
        too_close = False
        for sr, sc, _ in selected:
            dr = sr - r
            dc = sc - c
            if dr * dr + dc * dc <= spacing2:
                too_close = True
                break
        if not too_close:
            selected.append((int(r), int(c), float(h)))
    return selected

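The greedy spacing pass above can be sketched on its own: candidates arrive sorted tallest-first, and a peak survives only if no already-kept peak lies within `spacing_px`. The candidate list here is made up for illustration:

```python
# Greedy non-maximum suppression: tallest peaks win; nearby shorter peaks
# inside the spacing radius are discarded.
candidates = [(4, 4, 20.0), (4, 6, 15.0), (0, 0, 12.0)]  # (row, col, height), tallest first
spacing_px = 3
selected = []
for r, c, h in candidates:
    if all((sr - r) ** 2 + (sc - c) ** 2 > spacing_px ** 2 for sr, sc, _ in selected):
        selected.append((r, c, h))
# (4, 6) is only 2 px from (4, 4), inside the 3 px radius, so it is suppressed
```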
def _local_std(chm: np.ndarray, win: int = 3) -> np.ndarray:
    if chm.size == 0:
        return chm
    pad = win // 2
    padded = np.pad(chm, pad_width=pad, mode="constant", constant_values=np.nan)
    windows = sliding_window_view(padded, (win, win))
    std = np.nanstd(windows, axis=(2, 3))
    return std

def _proxy_variants(count: int) -> List[Tuple[np.ndarray, np.ndarray]]:
    """Return a list of (vertices, indices) proxy meshes. Vertices are (N,3), indices (M,3)."""
    variants: list[Tuple[np.ndarray, np.ndarray]] = []
    base_segments = 32
    for idx in range(count):
        rng = _hash_int(f"tree_proxy_{idx}", 2**31 - 1)
        # slight randomization of proportions
        canopy_scale = 0.8 + (rng % 200) / 1000.0  # 0.8..1.0
        trunk_radius = 0.10 + ((rng // 10) % 40) / 1000.0  # 0.10..0.14
        canopy_height = 1.4 * canopy_scale
        canopy_radius = 0.9 * canopy_scale
        canopy2_height = 0.9 * canopy_scale
        canopy2_radius = 0.65 * canopy_scale

        verts: list[Tuple[float, float, float]] = []
        faces: list[Tuple[int, int, int]] = []

        def add_cylinder(y0: float, y1: float, r: float, segments: int) -> None:
            start_idx = len(verts)
            for s in range(segments):
                angle = 2 * math.pi * s / segments
                x = r * math.cos(angle)
                z = r * math.sin(angle)
                verts.append((x, y0, z))
                verts.append((x, y1, z))
            for s in range(segments):
                i0 = start_idx + 2 * s
                i1 = start_idx + 2 * s + 1
                i2 = start_idx + (2 * ((s + 1) % segments))
                i3 = start_idx + (2 * ((s + 1) % segments) + 1)
                faces.append((i0, i2, i1))
                faces.append((i1, i2, i3))

        def add_cone(y0: float, h: float, r: float, segments: int) -> None:
            start_idx = len(verts)
            tip_idx = start_idx + segments
            for s in range(segments):
                angle = 2 * math.pi * s / segments
                x = r * math.cos(angle)
                z = r * math.sin(angle)
                verts.append((x, y0, z))
            verts.append((0.0, y0 + h, 0.0))
            for s in range(segments):
                i0 = start_idx + s
                i1 = start_idx + ((s + 1) % segments)
                faces.append((i0, i1, tip_idx))

        # Trunk from y=0..1
        add_cylinder(0.0, 1.0, trunk_radius, 16)
        # Lower canopy
        add_cone(1.0, canopy_height, canopy_radius, base_segments)
        # Upper canopy
        add_cone(1.0 + canopy_height * 0.7, canopy2_height, canopy2_radius, base_segments)

        variants.append((np.array(verts, dtype=np.float32), np.array(faces, dtype=np.uint32)))
    return variants

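The open-cylinder tessellation above can be verified in isolation; a standalone sketch (the `cylinder` helper is local to this example) confirming the expected vertex and triangle counts:

```python
# Open cylinder: two rings of `segments` vertices, two triangles per side quad.
import math

def cylinder(y0, y1, r, segments):
    verts, faces = [], []
    for s in range(segments):
        a = 2 * math.pi * s / segments
        verts.append((r * math.cos(a), y0, r * math.sin(a)))
        verts.append((r * math.cos(a), y1, r * math.sin(a)))
    for s in range(segments):
        i0, i1 = 2 * s, 2 * s + 1
        i2 = 2 * ((s + 1) % segments)
        i3 = i2 + 1
        faces.append((i0, i2, i1))
        faces.append((i1, i2, i3))
    return verts, faces

v, f = cylinder(0.0, 1.0, 0.1, 16)
assert len(v) == 32   # 2 vertices per segment
assert len(f) == 32   # 16 side quads, 2 triangles each
```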
def _compose_gltf(chunks: List[Tuple[np.ndarray, np.ndarray]], material_unlit: bool = True) -> bytes:
    """Build a minimal GLB with one mesh/node per chunk (combined meshes)."""
    buffer_views = []
    accessors = []
    meshes = []
    nodes = []
    bin_data = bytearray()
    material = {
        "pbrMetallicRoughness": {"baseColorFactor": [0.35, 0.47, 0.32, 1.0], "metallicFactor": 0.0, "roughnessFactor": 1.0}
    }
    extensions_used = []
    if material_unlit:
        material.setdefault("extensions", {})["KHR_materials_unlit"] = {}
        extensions_used.append("KHR_materials_unlit")

    for verts, faces in chunks:
        if verts.size == 0 or faces.size == 0:
            continue
        # Positions (align the view to a 4-byte boundary first)
        pos_offset = int(math.ceil(len(bin_data) / 4.0) * 4)
        if pos_offset > len(bin_data):
            bin_data.extend(b"\x00" * (pos_offset - len(bin_data)))
        pos_bytes = verts.tobytes()
        bin_data.extend(pos_bytes)
        pos_view = {"buffer": 0, "byteOffset": pos_offset, "byteLength": len(pos_bytes)}
        buffer_views.append(pos_view)
        pos_min = verts.min(axis=0).tolist()
        pos_max = verts.max(axis=0).tolist()
        accessors.append(
            {
                "bufferView": len(buffer_views) - 1,
                "componentType": 5126,  # FLOAT
                "count": len(verts),
                "type": "VEC3",
                "min": pos_min,
                "max": pos_max,
            }
        )
        pos_accessor_idx = len(accessors) - 1

        # Indices
        idx_offset = int(math.ceil(len(bin_data) / 4.0) * 4)
        if idx_offset > len(bin_data):
            bin_data.extend(b"\x00" * (idx_offset - len(bin_data)))
        idx_bytes = faces.astype(np.uint32).reshape(-1).tobytes()
        bin_data.extend(idx_bytes)
        idx_view = {"buffer": 0, "byteOffset": idx_offset, "byteLength": len(idx_bytes)}
        buffer_views.append(idx_view)
        accessors.append(
            {
                "bufferView": len(buffer_views) - 1,
                "componentType": 5125,  # UNSIGNED_INT
                "count": faces.size,
                "type": "SCALAR",
            }
        )
        idx_accessor_idx = len(accessors) - 1

        meshes.append(
            {
                "primitives": [
                    {
                        "attributes": {"POSITION": pos_accessor_idx},
                        "indices": idx_accessor_idx,
                        "material": 0,
                    }
                ]
            }
        )
        nodes.append({"mesh": len(meshes) - 1})

    if not nodes:
        return b""

    gltf = {
        "asset": {"version": "2.0"},
        "scene": 0,
        "scenes": [{"nodes": list(range(len(nodes)))}],
        "nodes": nodes,
        "meshes": meshes,
        "materials": [material],
        "buffers": [{"byteLength": len(bin_data)}],
        "bufferViews": buffer_views,
        "accessors": accessors,
    }
    if extensions_used:
        gltf["extensionsUsed"] = extensions_used

    json_bytes = json.dumps(gltf, separators=(",", ":")).encode("utf-8")

    # Pad chunks to 4-byte boundaries; the GLB spec pads JSON with spaces and BIN with zeros.
    def pad4(data: bytes, fill: bytes) -> bytes:
        pad_len = (4 - (len(data) % 4)) % 4
        return data + fill * pad_len

    json_padded = pad4(json_bytes, b"\x20")
    bin_padded = pad4(bytes(bin_data), b"\x00")

    total_len = 12 + 8 + len(json_padded) + 8 + len(bin_padded)
    header = struct.pack("<4sII", b"glTF", 2, total_len)
    json_header = struct.pack("<I4s", len(json_padded), b"JSON")
    bin_header = struct.pack("<I4s", len(bin_padded), b"BIN\x00")

    return b"".join([header, json_header, json_padded, bin_header, bin_padded])

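The GLB container emitted above is a 12-byte header, then a JSON chunk, then a BIN chunk, each chunk prefixed by (length, type) and padded to a 4-byte boundary. A standalone round-trip sketch of that layout with a throwaway payload:

```python
# Minimal GLB: header (magic, version, total length) + JSON chunk + BIN chunk.
import json
import struct

payload = json.dumps({"asset": {"version": "2.0"}}).encode("utf-8")
payload += b"\x20" * ((4 - len(payload) % 4) % 4)   # JSON chunks pad with spaces
blob = b"\x00" * 8                                   # BIN chunks pad with zeros
header = struct.pack("<4sII", b"glTF", 2, 12 + 8 + len(payload) + 8 + len(blob))
glb = (header
       + struct.pack("<I4s", len(payload), b"JSON") + payload
       + struct.pack("<I4s", len(blob), b"BIN\x00") + blob)

magic, version, total = struct.unpack_from("<4sII", glb, 0)
assert magic == b"glTF" and version == 2 and total == len(glb)
json_len, json_type = struct.unpack_from("<I4s", glb, 12)
assert json_type == b"JSON"
assert json.loads(glb[20:20 + json_len])["asset"]["version"] == "2.0"
```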
def _compose_gltf_instanced(
    instances: List[List[Tuple[float, float, float, float, float, float]]],
    proxy: Tuple[np.ndarray, np.ndarray],
    material_unlit: bool = True,
) -> bytes:
    """Build a GLB using EXT_mesh_gpu_instancing (one prototype mesh, one node per chunk)."""
    verts, faces = proxy
    if verts.size == 0 or faces.size == 0:
        return b""
    buffer_views = []
    accessors = []
    meshes = []
    nodes = []
    bin_data = bytearray()
    material = {
        "pbrMetallicRoughness": {"baseColorFactor": [0.35, 0.47, 0.32, 1.0], "metallicFactor": 0.0, "roughnessFactor": 1.0}
    }
    extensions_used = ["EXT_mesh_gpu_instancing"]
    if material_unlit:
        material.setdefault("extensions", {})["KHR_materials_unlit"] = {}
        extensions_used.append("KHR_materials_unlit")

    def add_view(data: bytes) -> int:
        offset = int(math.ceil(len(bin_data) / 4.0) * 4)
        if offset > len(bin_data):
            bin_data.extend(b"\x00" * (offset - len(bin_data)))
        bin_data.extend(data)
        buffer_views.append({"buffer": 0, "byteOffset": offset, "byteLength": len(data)})
        return len(buffer_views) - 1

    def add_accessor(view_idx: int, count: int, comp: int, type_str: str, min_val=None, max_val=None) -> int:
        acc = {"bufferView": view_idx, "componentType": comp, "count": count, "type": type_str}
        if min_val is not None:
            acc["min"] = min_val
        if max_val is not None:
            acc["max"] = max_val
        accessors.append(acc)
        return len(accessors) - 1

    # Prototype mesh
    pos_view = add_view(verts.astype(np.float32).tobytes())
    pos_acc = add_accessor(pos_view, len(verts), 5126, "VEC3", verts.min(axis=0).tolist(), verts.max(axis=0).tolist())
    idx_view = add_view(faces.astype(np.uint32).reshape(-1).tobytes())
    idx_acc = add_accessor(idx_view, faces.size, 5125, "SCALAR")
    meshes.append({"primitives": [{"attributes": {"POSITION": pos_acc}, "indices": idx_acc, "material": 0}]})

    # Instances per chunk -> nodes with extension
    for chunk in instances:
        if not chunk:
            continue
        translations = []
        rotations = []
        scales = []
        for (tx, ty, tz, yaw, sx, sy) in chunk:
            translations.append((tx, ty, tz))
            # quaternion for yaw around Y
            cy = math.cos(yaw * 0.5)
            syq = math.sin(yaw * 0.5)
            rotations.append((0.0, syq, 0.0, cy))
            scales.append((sx, sy, sx))

        def add_inst_attr(data: List[Tuple[float, float, float]], type_str: str) -> int:
            arr = np.array(data, dtype=np.float32)
            view = add_view(arr.tobytes())
            return add_accessor(view, len(data), 5126, type_str)

        trans_acc = add_inst_attr(translations, "VEC3")
        rot_acc = add_inst_attr(rotations, "VEC4")
        scale_acc = add_inst_attr(scales, "VEC3")
        nodes.append(
            {
                "mesh": 0,
                "extensions": {
                    "EXT_mesh_gpu_instancing": {
                        "attributes": {
                            "TRANSLATION": trans_acc,
                            "ROTATION": rot_acc,
                            "SCALE": scale_acc,
                        }
                    }
                },
            }
        )

    if not nodes:
        return b""

    gltf = {
        "asset": {"version": "2.0"},
        "scene": 0,
        "scenes": [{"nodes": list(range(len(nodes)))}],
        "nodes": nodes,
        "meshes": meshes,
        "materials": [material],
        "buffers": [{"byteLength": len(bin_data)}],
        "bufferViews": buffer_views,
        "accessors": accessors,
        "extensionsUsed": extensions_used,
    }

    json_bytes = json.dumps(gltf, separators=(",", ":")).encode("utf-8")
    # Pad chunks to 4-byte boundaries; the GLB spec pads JSON with spaces and BIN with zeros.
    json_padded = json_bytes + b"\x20" * ((4 - (len(json_bytes) % 4)) % 4)
    bin_padded = bytes(bin_data) + b"\x00" * ((4 - (len(bin_data) % 4)) % 4)
    total_len = 12 + 8 + len(json_padded) + 8 + len(bin_padded)
    header = struct.pack("<4sII", b"glTF", 2, total_len)
    json_header = struct.pack("<I4s", len(json_padded), b"JSON")
    bin_header = struct.pack("<I4s", len(bin_padded), b"BIN\x00")
    return b"".join([header, json_header, json_padded, bin_header, bin_padded])

def _write_tree_csv(path: str, rows: Iterable[dict]) -> None:
    ensure_parent(path)
    with open(path, "w", encoding="utf-8", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=["x_local", "y_local", "z_ground", "height", "radius", "confidence"])
        writer.writeheader()
        for row in rows:
            writer.writerow(row)

def _write_proxy_library(path: str, proxies: List[Tuple[np.ndarray, np.ndarray]]) -> None:
    """Export the proxy variants as separate nodes/meshes for reference."""
    if os.path.exists(path):
        return
    buffer_views = []
    accessors = []
    meshes = []
    nodes = []
    bin_data = bytearray()
    material = {
        "pbrMetallicRoughness": {"baseColorFactor": [0.35, 0.47, 0.32, 1.0], "metallicFactor": 0.0, "roughnessFactor": 1.0},
        "extensions": {"KHR_materials_unlit": {}},
    }
    extensions_used = ["KHR_materials_unlit"]

    for verts, faces in proxies:
        if verts.size == 0 or faces.size == 0:
            continue
        pos_offset = int(math.ceil(len(bin_data) / 4.0) * 4)
        if pos_offset > len(bin_data):
            bin_data.extend(b"\x00" * (pos_offset - len(bin_data)))
        pos_bytes = verts.tobytes()
        bin_data.extend(pos_bytes)
        buffer_views.append({"buffer": 0, "byteOffset": pos_offset, "byteLength": len(pos_bytes)})
        pos_min = verts.min(axis=0).tolist()
        pos_max = verts.max(axis=0).tolist()
        accessors.append(
            {
                "bufferView": len(buffer_views) - 1,
                "componentType": 5126,
                "count": len(verts),
                "type": "VEC3",
                "min": pos_min,
                "max": pos_max,
            }
        )
        pos_accessor_idx = len(accessors) - 1

        idx_offset = int(math.ceil(len(bin_data) / 4.0) * 4)
        if idx_offset > len(bin_data):
            bin_data.extend(b"\x00" * (idx_offset - len(bin_data)))
        idx_bytes = faces.astype(np.uint32).reshape(-1).tobytes()
        bin_data.extend(idx_bytes)
        buffer_views.append({"buffer": 0, "byteOffset": idx_offset, "byteLength": len(idx_bytes)})
        accessors.append(
            {
                "bufferView": len(buffer_views) - 1,
                "componentType": 5125,
                "count": faces.size,
                "type": "SCALAR",
            }
        )
        idx_accessor_idx = len(accessors) - 1

        meshes.append({"primitives": [{"attributes": {"POSITION": pos_accessor_idx}, "indices": idx_accessor_idx, "material": 0}]})
        nodes.append({"mesh": len(meshes) - 1})

    if not nodes:
        return

    gltf = {
        "asset": {"version": "2.0"},
        "scene": 0,
        "scenes": [{"nodes": list(range(len(nodes)))}],
        "nodes": nodes,
        "meshes": meshes,
        "materials": [material],
        "buffers": [{"byteLength": len(bin_data)}],
        "bufferViews": buffer_views,
        "accessors": accessors,
        "extensionsUsed": extensions_used,
    }

    json_bytes = json.dumps(gltf, separators=(",", ":")).encode("utf-8")

    def pad4(data: bytes, fill: bytes) -> bytes:
        return data + fill * ((4 - (len(data) % 4)) % 4)

    json_padded = pad4(json_bytes, b"\x20")  # JSON chunks pad with spaces
    bin_padded = pad4(bytes(bin_data), b"\x00")  # BIN chunks pad with zeros
    total_len = 12 + 8 + len(json_padded) + 8 + len(bin_padded)
    header = struct.pack("<4sII", b"glTF", 2, total_len)
    json_header = struct.pack("<I4s", len(json_padded), b"JSON")
    bin_header = struct.pack("<I4s", len(bin_padded), b"BIN\x00")
    ensure_parent(path)
    with open(path, "wb") as handle:
        handle.write(b"".join([header, json_header, json_padded, bin_header, bin_padded]))

def _tile_chunks(
    tile_id: str,
    trees: List[dict],
    cfg: Config,
    tile_bounds: Tuple[float, float, float, float],
) -> List[Tuple[np.ndarray, np.ndarray]]:
    """Build per-chunk combined meshes using procedural proxies."""
    if not trees:
        return []
    chunk_grid = max(1, cfg.trees.chunk_grid)
    proxies = _proxy_variants(cfg.trees.proxy_variants)
    # Estimate base height for proxies (used for scale)
    base_height = 1.0 + 1.4 + 0.9 + 0.9  # trunk + two canopies (approx)
    # Group trees into chunks; tree coordinates are already tile-local (0..width)
    xmin, ymin, xmax, ymax = tile_bounds
    width = xmax - xmin
    height = ymax - ymin
    chunk_w = width / chunk_grid
    chunk_h = height / chunk_grid

    chunk_lists: list[list[list[dict]]] = [[[] for _ in range(chunk_grid)] for _ in range(chunk_grid)]
    for tree in trees:
        cx = min(chunk_grid - 1, max(0, int(tree["x_local"] / (chunk_w + 1e-6))))
        cy = min(chunk_grid - 1, max(0, int(tree["y_local"] / (chunk_h + 1e-6))))
        chunk_lists[cy][cx].append(tree)

    chunk_meshes: list[Tuple[np.ndarray, np.ndarray]] = []
    for gy in range(chunk_grid):
        for gx in range(chunk_grid):
            subset = chunk_lists[gy][gx]
            if not subset:
                continue
            verts_acc: list[Tuple[float, float, float]] = []
            faces_acc: list[Tuple[int, int, int]] = []
            for idx, tree in enumerate(subset):
                variant_idx = _hash_int(f"{tile_id}_{gx}_{gy}_{idx}", cfg.trees.proxy_variants)
                base_verts, base_faces = proxies[variant_idx]
                # Scales
                target_h = max(cfg.trees.min_height_m, tree["height"])
                radial = max(cfg.trees.grid_res_m * 0.8, tree["radius"])
                scale_y = target_h / base_height
                scale_xz = radial / 1.0
                yaw = (_hash_int(f"yaw_{tile_id}_{gx}_{gy}_{idx}", 3600) / 3600.0) * 2 * math.pi
                cos_y = math.cos(yaw)
                sin_y = math.sin(yaw)
                x0 = tree["x_local"]
                z0 = tree["y_local"]
                y0 = tree["z_ground"]
                for vx, vy, vz in base_verts:
                    # apply scale
                    sx = vx * scale_xz
                    sy = vy * scale_y
                    sz = vz * scale_xz
                    # rotate around Y
                    rx = sx * cos_y - sz * sin_y
                    rz = sx * sin_y + sz * cos_y
                    # glTF axes: x=east, y=height, z=-north
                    verts_acc.append((x0 + rx, y0 + sy, -(z0 + rz)))
                offset = len(verts_acc) - len(base_verts)
                for f0, f1, f2 in base_faces:
                    faces_acc.append((offset + int(f0), offset + int(f1), offset + int(f2)))
            chunk_meshes.append((np.array(verts_acc, dtype=np.float32), np.array(faces_acc, dtype=np.uint32)))
    return chunk_meshes

def _chunk_instances(
    tile_id: str,
    trees: List[dict],
    cfg: Config,
    tile_bounds: Tuple[float, float, float, float],
) -> List[List[Tuple[float, float, float, float, float, float]]]:
    """Return per-chunk instance transforms (tx, ty, tz, yaw, sx, sy)."""
    if not trees:
        return []
    chunk_grid = max(1, cfg.trees.chunk_grid)
    xmin, ymin, xmax, ymax = tile_bounds
    width = xmax - xmin
    height = ymax - ymin
    chunk_w = width / chunk_grid
    chunk_h = height / chunk_grid
    chunks: list[list[list[Tuple[float, float, float, float, float, float]]]] = [
        [[] for _ in range(chunk_grid)] for _ in range(chunk_grid)
    ]
    base_height = 1.0 + 1.4 + 0.9 + 0.9
    for idx, tree in enumerate(trees):
        # Tree coordinates are already tile-local (0..width)
        cx = min(chunk_grid - 1, max(0, int(tree["x_local"] / (chunk_w + 1e-6))))
        cy = min(chunk_grid - 1, max(0, int(tree["y_local"] / (chunk_h + 1e-6))))
        target = chunks[cy][cx]
        target_h = max(cfg.trees.min_height_m, tree["height"])
        radial = max(cfg.trees.grid_res_m * 0.8, tree["radius"])
        scale_y = target_h / base_height
        scale_xz = radial / 1.0
        yaw = (_hash_int(f"yaw_{tile_id}_{cx}_{cy}_{idx}", 3600) / 3600.0) * 2 * math.pi
        target.append((tree["x_local"], tree["z_ground"], -tree["y_local"], yaw, scale_xz, scale_y))
    flat: list[List[Tuple[float, float, float, float, float, float]]] = []
    for gy in range(chunk_grid):
        for gx in range(chunk_grid):
            flat.append(chunks[gy][gx])
    return flat

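The chunk bucketing shared by both functions above can be sketched standalone (numbers hypothetical; `bucket` is local to this example): tile-local XY maps onto a `chunk_grid` × `chunk_grid` lattice, clamped so border trees never index out of range:

```python
# Grid bucketing with clamping: a 1000 m tile split into a 4x4 chunk lattice.
chunk_grid = 4
tile_size = 1000.0
chunk_w = tile_size / chunk_grid
chunk_h = tile_size / chunk_grid

def bucket(x_local: float, y_local: float) -> tuple[int, int]:
    cx = min(chunk_grid - 1, max(0, int(x_local / (chunk_w + 1e-6))))
    cy = min(chunk_grid - 1, max(0, int(y_local / (chunk_h + 1e-6))))
    return cx, cy

assert bucket(10.0, 10.0) == (0, 0)
assert bucket(999.9, 999.9) == (3, 3)    # near the border, still in range
assert bucket(1000.0, 0.0) == (3, 0)     # exactly on the edge is clamped
```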
def export_trees(cfg: Config, *, force_vrt: bool = False) -> int:
    """Detect trees from DOM1/point clouds and export per-tile CSV + chunked GLB."""
    ensure_dir(cfg.work.work_dir)
    ensure_dir(cfg.trees.csv_dir)
    ensure_dir(cfg.trees.glb_dir)
    proxies = _proxy_variants(cfg.trees.proxy_variants)
    _write_proxy_library(cfg.trees.proxy_library, proxies)

    if not os.path.exists(cfg.export.manifest_path):
        raise SystemExit(f"Tile index missing: {cfg.export.manifest_path}. Run heightmap export first.")

    dom_vrt_path = _ensure_dom_vrt("raw/dom1", os.path.join(cfg.work.work_dir, "dom1.vrt"))
    dgm_vrt_path = _ensure_dgm_vrt(cfg)

    written = 0
    glb_written = 0

    with open(cfg.export.manifest_path, newline="", encoding="utf-8") as handle:
        reader = csv.DictReader(handle)
        for row in reader:
            try:
                tile_id = row["tile_id"]
                xmin = float(row["xmin"])
                ymin = float(row["ymin"])
                xmax = float(row["xmax"])
                ymax = float(row["ymax"])
            except (KeyError, ValueError) as exc:
                print(f"[trees] skip malformed row {row}: {exc}")
                continue

            bounds = (xmin, ymin, xmax, ymax)
            try:
                dtm_ds = _warp_to_tile(dgm_vrt_path, bounds, cfg.trees.grid_res_m)
                dom_ds = _warp_to_tile(dom_vrt_path, bounds, cfg.trees.grid_res_m)
            except RuntimeError as exc:
                print(f"[trees] warp failed for {tile_id}: {exc}")
                continue

            dtm = dtm_ds.ReadAsArray().astype(np.float32)
            dom = dom_ds.ReadAsArray().astype(np.float32)
            # _warp_to_tile uses NaN nodata; equality against NaN never matches,
            # so mask non-finite cells as well as any declared nodata value.
            nodata = dtm_ds.GetRasterBand(1).GetNoDataValue()
            mask = ~np.isfinite(dtm)
            if nodata is not None and math.isfinite(nodata):
                mask |= dtm == nodata
            chm = dom - dtm
            chm = np.where(mask, np.nan, chm)
            chm = np.where(chm < 0, np.nan, chm)

            bmask = _building_mask(tile_id, bounds, dtm_ds)
            if bmask is not None:
                px_buffer = int(round(cfg.trees.building_buffer_m / max(cfg.trees.grid_res_m, 0.1)))
                bmask = _dilate_mask(bmask, px_buffer)
                chm = np.where(bmask == 1, np.nan, chm)

            # ~2.5 m minimum peak spacing, expressed in pixels
            spacing_px = max(2, int(math.ceil(2.5 / cfg.trees.grid_res_m)))
            maxima = _local_maxima(chm, cfg.trees.min_height_m, spacing_px)
            if not maxima:
                print(f"[trees] no trees found for {tile_id}")
                continue
            # lightweight presence signals
            has_bdom = bool(glob.glob(os.path.join("raw", "bdom20rgbi", f"*{_tile_suffix(tile_id)}*.laz")))
            has_lpo = bool(glob.glob(os.path.join("raw", "lpo", f"lpo_{_tile_suffix(tile_id)}.xyz")))

            # Sort by height desc and cap
            maxima.sort(key=lambda m: m[2], reverse=True)
            maxima = maxima[: cfg.trees.max_trees]
            rows_out: list[dict] = []

            gt = dtm_ds.GetGeoTransform()
            xres = gt[1]
            yres = gt[5]
            rough = _local_std(chm, win=3)
            for r, c, h in maxima:
                x = gt[0] + (c + 0.5) * xres
                y = gt[3] + (r + 0.5) * yres
                ground = float(dtm[r, c])
                if not math.isfinite(ground):
                    continue
                radius = max(cfg.trees.grid_res_m, min(6.0, h * 0.25))
                roughness = rough[r, c] if rough.size else 0.0
                confidence = 0.6 * (h / 30.0) + 0.4 * min(1.0, roughness / 5.0)
                if has_bdom:
                    confidence += 0.05
                if has_lpo:
                    confidence += 0.05
                confidence = min(1.0, confidence)
                rows_out.append(
                    {
                        "x_local": x - xmin,
                        "y_local": y - ymin,
                        "z_ground": ground,
                        "height": h,
                        "radius": radius,
                        "confidence": confidence,
                    }
                )

            csv_path = os.path.join(cfg.trees.csv_dir, f"{tile_id}.csv")
            _write_tree_csv(csv_path, rows_out)
            written += 1
            glb_path = os.path.join(cfg.trees.glb_dir, f"{tile_id}.glb")
            ensure_parent(glb_path)
            if cfg.trees.instancing:
                instances = _chunk_instances(tile_id, rows_out, cfg, bounds)
                glb_bytes = _compose_gltf_instanced(instances, proxies[0], material_unlit=True)
            else:
                chunk_meshes = _tile_chunks(tile_id, rows_out, cfg, bounds)
                glb_bytes = _compose_gltf(chunk_meshes, material_unlit=True)
            if glb_bytes:
                with open(glb_path, "wb") as handle_glb:
                    handle_glb.write(glb_bytes)
                glb_written += 1
                print(f"[trees] wrote {csv_path} and {glb_path}")
            else:
                print(f"[trees] wrote {csv_path} (no GLB, empty chunks)")

    print(f"[trees] Summary: wrote {written} CSV(s); wrote {glb_written} GLB(s).")
    return 0 if written else 1

@@ -8,9 +8,11 @@ import sys
 from typing import Iterable

 from geodata_pipeline.config import Config, DEFAULT_CONFIG_PATH, ensure_default_config
+from geodata_pipeline.buildings import export_buildings
 from geodata_pipeline.heightmaps import export_heightmaps
 from geodata_pipeline.orthophotos import export_orthophotos
 from geodata_pipeline.setup_helpers import ensure_directories, materialize_archives
+from geodata_pipeline.trees import export_trees


 def parse_args(argv: Iterable[str] | None = None) -> argparse.Namespace:
@@ -22,7 +24,7 @@ def parse_args(argv: Iterable[str] | None = None) -> argparse.Namespace:
     )
     parser.add_argument(
         "--export",
-        choices=["heightmap", "textures", "all"],
+        choices=["heightmap", "textures", "buildings", "trees", "all"],
         default="all",
         help="Which assets to export.",
     )
@@ -77,6 +79,10 @@ def main(argv: Iterable[str] | None = None) -> int:
         exit_codes.append(export_heightmaps(cfg, force_vrt=args.force_vrt))
     if args.export in ("textures", "all"):
         exit_codes.append(export_orthophotos(cfg, force_vrt=args.force_vrt))
+    if args.export in ("buildings", "all"):
+        exit_codes.append(export_buildings(cfg))
+    if args.export in ("trees", "all"):
+        exit_codes.append(export_trees(cfg))

     return max(exit_codes) if exit_codes else 0