Features
Know what's running on your network — every port, every change, every scan — auditable from day one.
Scanning & Discovery
Schedule and execute TCP port scans across IPv4 and IPv6 subnets from multiple network vantage points simultaneously. The dispatcher resolves templates into concrete targets, expands CIDR ranges with deduplication, batches jobs across scanners, and monitors for stalls — so operators work with named groups while scanners receive fully resolved jobs. Every result is timestamped in TimescaleDB for historical diffing.
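The dispatcher's CIDR expansion and deduplication step can be sketched as follows. This is an illustrative sketch built on Go's standard `net/netip` package — the function name and behavior are assumptions, not mipo's actual implementation:

```go
package main

import (
	"fmt"
	"net/netip"
)

// expandTargets expands CIDR ranges into individual host addresses,
// deduplicating overlapping ranges the way a dispatcher might before
// handing fully resolved jobs to scanners.
func expandTargets(cidrs []string) ([]string, error) {
	seen := make(map[netip.Addr]bool)
	var out []string
	for _, c := range cidrs {
		prefix, err := netip.ParsePrefix(c)
		if err != nil {
			return nil, err
		}
		// Walk every address in the prefix, skipping ones already emitted.
		for addr := prefix.Masked().Addr(); prefix.Contains(addr); addr = addr.Next() {
			if !seen[addr] {
				seen[addr] = true
				out = append(out, addr.String())
			}
		}
	}
	return out, nil
}

func main() {
	// Overlapping ranges: 10.0.0.0/30 is contained in 10.0.0.0/29,
	// so the union is the 8 addresses of the /29, not 4 + 8.
	targets, err := expandTargets([]string{"10.0.0.0/30", "10.0.0.0/29"})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(targets), targets[0]) // 8 10.0.0.0
}
```

Tracking seen addresses in a map keeps expansion linear in the number of unique hosts, regardless of how much the configured ranges overlap.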
Select a scan template and launch it on demand. The UI shows real-time progress — ports scanned, hosts completed, and scan status — updated via polling as scanners report results back through the ingest nodes.
View all active and completed scans in a sortable, filterable table. Each scan shows its template, assigned scanners, start/end timestamps, and completion percentage. Filter by status, scanner, or date range.
Query historical scan results stored in TimescaleDB. Compare results across time to detect newly opened or closed ports. Filter by subnet, port, scanner, or time window. Results are retained indefinitely for compliance evidence.
Attach a schedule to any scan template using cron expressions or preset cadences (daily, weekly, monthly). Configurable jitter window randomizes start times to avoid predictable scan patterns. The dispatcher evaluates schedules and launches scans automatically.
Group scanners by network location and scan the same targets from every scanner in the group simultaneously. Each result is tagged with the originating scanner ID, enabling side-by-side comparison of what each network vantage point can see.
Large scan scopes are split into batches sized to each scanner's capacity. A background reconciler monitors in-flight scans for stalls — jobs that haven't reported progress within the expected window emit recovery events and can be reassigned to healthy scanners.
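Splitting a scope into capacity-sized batches is the simpler half of the behavior above. A hypothetical sketch (batch sizing in mipo may account for more than a flat per-scanner cap):

```go
package main

import "fmt"

// splitBatches splits a target list into batches of at most capacity
// targets, the way a dispatcher might size work to each scanner.
func splitBatches(targets []string, capacity int) [][]string {
	var batches [][]string
	for len(targets) > 0 {
		n := capacity
		if len(targets) < n {
			n = len(targets) // final, partial batch
		}
		batches = append(batches, targets[:n])
		targets = targets[n:]
	}
	return batches
}

func main() {
	hosts := make([]string, 10)
	for i := range hosts {
		hosts[i] = fmt.Sprintf("10.0.0.%d", i+1)
	}
	b := splitBatches(hosts, 4)
	fmt.Println(len(b), len(b[0]), len(b[2])) // 3 4 2
}
```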
Scanner Architecture
Most scanning tools require you to trust a third-party agent with network access and automatic updates. mipo's scanner is a single Go binary with zero dependencies — no node_modules, no transitive supply chain risk. It doesn't phone home, doesn't self-update, and the entire source fits in one directory. You can read every line of code that runs on your network.
The scanner is a single statically-linked executable built entirely from Go standard library packages — zero third-party dependencies, zero entries in go.mod. The entire codebase is small enough to audit in one sitting, which matters when you are deploying it inside customer networks.
The management UI generates a curl one-liner containing a short-lived provisioning token. Running it on the target host registers the scanner and exchanges the token for a permanent API key. The provisioning token expires after first use or after a configurable timeout.
Each scanner sends a heartbeat to the ingest API every 60 seconds. If no heartbeat is received for 2 minutes, the scanner is automatically marked offline and a system event is emitted. The health dashboard shows last-seen timestamps and connection history for every scanner.
Lock each scanner to a specific source IP address, CIDR subnet, or ASN. The ingest API validates the source IP of every heartbeat and result submission against the configured binding. Connections from unauthorized IPs are rejected and logged.
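The IP and CIDR cases of this validation can be sketched with Go's `net/netip` package. ASN bindings would require an external IP-to-ASN dataset and are omitted; the function name is an assumption:

```go
package main

import (
	"fmt"
	"net/netip"
)

// allowed checks a submission's source IP against a scanner's configured
// binding, which may be a single address or a CIDR subnet.
func allowed(binding, sourceIP string) (bool, error) {
	src, err := netip.ParseAddr(sourceIP)
	if err != nil {
		return false, err
	}
	// Try CIDR first; fall back to a single-address binding.
	if prefix, err := netip.ParsePrefix(binding); err == nil {
		return prefix.Contains(src), nil
	}
	addr, err := netip.ParseAddr(binding)
	if err != nil {
		return false, err
	}
	return addr == src, nil
}

func main() {
	ok, _ := allowed("192.0.2.0/24", "192.0.2.17")
	bad, _ := allowed("192.0.2.0/24", "198.51.100.9")
	fmt.Println(ok, bad) // true false
}
```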
Generate consolidated firewall allow-rules for all registered scanners in iptables, nginx allow/deny, and Cloudflare IP Access Rule formats. Useful for operations teams who need to allowlist scanner IPs across network infrastructure.
Soft-disable a scanner without deleting it. Disabled scanners retain their configuration, API key, and scan history but are excluded from new job assignments. Re-enable at any time to resume scanning without re-provisioning.
Revoke a scanner's API key and issue a new one without re-running the provisioning flow. The old key is immediately invalidated. Useful for key rotation policies or when a key may have been exposed.
Scanners report CPU load, memory usage, and network I/O alongside each heartbeat. These metrics are displayed on the scanner fleet health page, enabling operators to detect resource-constrained scanners before they start dropping jobs.
Audit & Compliance
Audit logging isn't a plugin you enable — it's a global interceptor that records every state change before the API response is sent. Configuration edits, login attempts, data access, role assignments — all captured with actor identity, timestamp, and field-level diffs. Your SOC 2 auditor gets complete evidence without anyone remembering to turn logging on.
Every create, update, and delete operation is recorded with the acting user, timestamp, affected resource, and a field-level diff showing exactly what changed. The log is append-only — records cannot be modified or deleted, even by the owner account.
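A field-level diff of this kind compares a resource's state before and after the change. The flat string-map representation below is an assumption for illustration; real resources have typed, nested fields:

```go
package main

import (
	"fmt"
	"sort"
)

// fieldDiff returns, for each changed field, its [old, new] values —
// the kind of record an audit interceptor would persist alongside the
// acting user and timestamp.
func fieldDiff(before, after map[string]string) map[string][2]string {
	diff := make(map[string][2]string)
	for k, oldV := range before {
		if newV, ok := after[k]; !ok || newV != oldV {
			diff[k] = [2]string{oldV, after[k]} // changed or removed field
		}
	}
	for k, newV := range after {
		if _, ok := before[k]; !ok {
			diff[k] = [2]string{"", newV} // added field
		}
	}
	return diff
}

func main() {
	d := fieldDiff(
		map[string]string{"name": "dmz-scan", "interval": "daily"},
		map[string]string{"name": "dmz-scan", "interval": "weekly", "jitter": "15m"},
	)
	keys := make([]string, 0, len(d))
	for k := range d {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	fmt.Println(keys) // [interval jitter] — "name" is unchanged, so absent
}
```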
Records every login attempt (success and failure), logout, and session expiration for both local and OIDC authentication methods. Each event includes the source IP address, user agent, and authentication method used.
Tracks when users view sensitive resources such as scanner API keys, user profile details, and configuration secrets. Provides a read-access audit trail required by compliance frameworks that mandate data access logging.
Download the full audit log or a filtered subset as structured data for ingestion into your SIEM, compliance reporting tool, or long-term archival storage. Useful for producing evidence during SOC 2 or ISO 27001 audits.
Compliance Mapping
This mapping is a starting point for evaluators — your auditor determines which controls are satisfied in your environment.
RBAC scopes, OIDC/SSO, owner bootstrap, session management: CC6.1, CC6.2 / A.9.2, A.9.4
Append-only audit trail, actor + timestamp, data access views: CC7.2, CC7.3 / A.12.4
Config version tracking, scan scope history: CC8.1 / A.12.1.2
15 built-in alarm rules, scanner heartbeats, health endpoints: CC7.1 / A.17.1
Subnet inventory, port catalog, scanner registry: CC6.6 / A.8.1
TLS everywhere, backup encryption, credential encryption at rest: CC6.7 / A.10.1
Alarm lifecycle (open → acknowledged → resolved), notification channels: CC7.4 / A.16.1
Identity & Access Control
Access control is enforced at the API layer, not the GUI. Every request is checked against resource:action scopes before it reaches a handler. The GUI simply reflects what the API permits — there are no client-side-only restrictions to bypass.
Create, edit, and delete user accounts. Each user can authenticate via local credentials, an external OIDC identity provider, or both. User list is sortable and filterable by role and authentication method.
Define custom roles using granular resource:action scopes such as scans:execute, scanners:manage, and audit:view. Scopes are enforced at the API middleware layer, so GUI and scripted clients are subject to identical checks. Roles can be assigned to multiple users.
Three immutable default roles ship with every installation: admin (all scopes), operator (scan execution, scanner viewing, and alerting), and viewer (read-only access to a safe subset of resources). Built-in roles cannot be modified or deleted, ensuring a known-good baseline.
Connect any standards-compliant OpenID Connect identity provider (Okta, Azure AD, Google Workspace, Keycloak, etc.). Supports automatic account linking by email and mapping IdP groups or claims to mipo roles for zero-touch provisioning.
A special bootstrap account created during initial setup that bypasses all RBAC scope checks. Intended for emergency access and first-time configuration. Cannot be deleted or have its permissions reduced.
Authenticated sessions use signed tokens with configurable expiry. Expired sessions are cleaned up automatically on a background schedule. All session creation and expiration events are written to the authentication audit log with the associated IP address and user agent.
Configuration
Scan targets are defined as named objects that compose into templates. Subnets, port lists, and scanner assignments combine into reusable scan templates that can be scheduled or triggered on demand. Scope changes are version-tracked so you can audit what was in scope during any historical scan.
Define IPv4 and IPv6 network ranges in CIDR notation. The UI calculates host counts and validates notation on input. Subnets are named objects that can be referenced by multiple scan templates, so renaming a network updates every template that uses it.
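The host-count figure the UI displays is derived from the prefix length. A sketch using `net/netip`; whether mipo subtracts network and broadcast addresses is not specified, so this version counts all addresses in the range:

```go
package main

import (
	"fmt"
	"math"
	"net/netip"
)

// hostCount returns the number of addresses in a CIDR range:
// 2^(address bits - prefix bits). float64 is used because IPv6
// counts overflow any integer type.
func hostCount(cidr string) (float64, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return 0, err // also how notation is validated on input
	}
	return math.Pow(2, float64(p.Addr().BitLen()-p.Bits())), nil
}

func main() {
	n, _ := hostCount("10.0.0.0/24")
	fmt.Println(n) // 256
}
```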
Organize subnets into named collections — for example, "production-dmz" or "branch-offices". Groups can be assigned to scan templates instead of individual subnets, simplifying configuration when the same set of networks is scanned by multiple templates.
A built-in reference library mapping well-known TCP ports to their service names and descriptions (e.g., 443/HTTPS, 5432/PostgreSQL). Used as a lookup when building port lists, and to enrich scan results with human-readable service labels.
Named collections of TCP ports that can be shared across scan templates. Define lists like "web-standard" (80, 443, 8080, 8443) or "database-ports" (3306, 5432, 27017) once and reference them everywhere.
Combine multiple port lists into a single comprehensive scan profile. For example, merge "web-standard", "database-ports", and "admin-services" into a "full-audit" group that covers all three sets of ports in one scan.
Automatically discover assets across your infrastructure by importing from external sources such as cloud provider APIs and CMDB exports. Discovery runs on a configurable schedule and auto-populates subnets into scan scope, ensuring coverage keeps pace with infrastructure changes without manual updates.
Reusable scan definitions that combine target scope (subnets or subnet groups), port lists, scanner assignments, and timing parameters into a single named configuration. Templates are the primary unit of work — schedules and on-demand scans both reference templates.
Bundle multiple scan templates into a group for batch scheduling. When a schedule triggers a template group, all templates in the group are launched together, enabling coordinated multi-scope scanning with a single schedule.
Health & Alerting
mipo continuously monitors its own infrastructure — manager, ingest nodes, databases, proxy, DNS, TLS certificates, backups, and the scanner fleet — and surfaces faults as structured alarms with severity levels and acknowledgment workflow. You don't monitor mipo with a separate tool; mipo monitors itself.
Customizable widget-based dashboards that provide at-a-glance visibility into scan activity, scanner health, and system status. Add, remove, resize, and rearrange widgets to build views tailored to different roles — NOC operators, security analysts, and compliance reviewers each see what matters to them.
Dedicated health pages for every component: manager process metrics, per-node ingest throughput, database connection pool utilization, Traefik proxy status, DNS resolution across public/internal/container layers, TLS certificate validity, and backup schedule adherence. All surfaced through a visual architecture overview with real-time status indicators.
Aggregated view of all registered scanners showing online/offline status, last heartbeat timestamps, system metrics (CPU, memory, network I/O), and active job assignments. Highlights scanners that have gone offline or are not picking up jobs.
Stateful, deduplicated alarms with a lifecycle of open, acknowledged, and resolved. When a fault is detected, an alarm is opened or bumped (if already open). Operators can acknowledge alarms to signal they are investigating. Alarms auto-resolve when the corresponding recovery event arrives.
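The open-or-bump deduplication described above can be sketched as a small keyed store. Field names and lifecycle handling are assumptions based on the text, not mipo's actual types:

```go
package main

import "fmt"

// Alarm is one stateful alarm instance.
type Alarm struct {
	Rule  string
	State string // "open", "acknowledged", "resolved"
	Count int    // how many times the fault has fired
}

// Store deduplicates alarms by rule.
type Store struct{ alarms map[string]*Alarm }

func NewStore() *Store { return &Store{alarms: make(map[string]*Alarm)} }

// Fault opens a new alarm, or bumps the existing one if it is not
// yet resolved — a repeated fault never creates a duplicate.
func (s *Store) Fault(rule string) *Alarm {
	if a, ok := s.alarms[rule]; ok && a.State != "resolved" {
		a.Count++
		return a
	}
	a := &Alarm{Rule: rule, State: "open", Count: 1}
	s.alarms[rule] = a
	return a
}

// Recover auto-resolves the alarm when the recovery event arrives.
func (s *Store) Recover(rule string) {
	if a, ok := s.alarms[rule]; ok {
		a.State = "resolved"
	}
}

func main() {
	s := NewStore()
	s.Fault("scanner_heartbeat_lost")
	a := s.Fault("scanner_heartbeat_lost") // same fault again: bumped, not duplicated
	s.Recover("scanner_heartbeat_lost")
	fmt.Println(a.Count, a.State) // 2 resolved
}
```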
15 built-in rules covering database connectivity, scanner heartbeat loss, TLS certificate expiry, backup schedule failures, DNS resolution, and more. Each rule maps an event pattern to an alarm with a default severity. Operators can override severity levels or disable individual rules without modifying code.
Deliver alarm events to outgoing webhooks (any HTTP endpoint), Slack (via incoming webhook URL), or email (via SMTP). Each channel has a built-in test button. Webhook requests include an HMAC signature header for verification. Failed deliveries retry with exponential backoff.
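The webhook signature works like this on both ends: the sender computes an HMAC over the request body with a shared secret, and the receiver recomputes and compares. The hex encoding and the idea of a single signature value are assumptions — check your deployment's docs for the exact header name and scheme:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// signWebhook computes the signature mipo-style senders would place in
// the HMAC header alongside the alarm payload.
func signWebhook(secret, body []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write(body)
	return hex.EncodeToString(mac.Sum(nil))
}

// verifyWebhook is the receiver side: recompute and compare in
// constant time so timing doesn't leak the expected signature.
func verifyWebhook(secret, body []byte, signature string) bool {
	return hmac.Equal([]byte(signWebhook(secret, body)), []byte(signature))
}

func main() {
	secret := []byte("channel-secret")
	body := []byte(`{"alarm":"scanner_offline","severity":"critical"}`)
	sig := signWebhook(secret, body)
	fmt.Println(verifyWebhook(secret, body, sig), verifyWebhook(secret, []byte("tampered"), sig))
	// true false
}
```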
Define routing rules that determine which alarm transitions (opened, escalated, resolved) are sent to which channels. Filter by alarm severity, specific alarm rules, or both. A read-only notification log tracks every delivery attempt with status codes and error messages.
Security & Operations
mipo is designed to go from a fresh server to a running, encrypted, monitored system with minimal manual steps. Secrets are auto-generated, TLS is enabled by default, backups are encrypted at rest, and three purpose-built databases (config, time-series results, transient jobs) isolate concerns by access pattern.
On first launch, when no owner account exists, all routes redirect to a setup wizard that walks you through creating the owner account and configuring the initial settings. The wizard is only accessible when the system has zero users — once an owner is created, the setup endpoints are permanently disabled.
The first time mipo starts, it generates a .mipo.env file containing cryptographically random database passwords, session signing salts, and encryption keys. Operators never need to invent or manage these secrets manually — they are created once and persisted on the host.
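Minting such secrets comes down to drawing bytes from the OS CSPRNG and encoding them. The length and hex encoding below are illustrative choices, not necessarily what mipo writes to .mipo.env:

```go
package main

import (
	"crypto/rand" // OS CSPRNG, not math/rand
	"encoding/hex"
	"fmt"
)

// newSecret returns n cryptographically random bytes, hex-encoded —
// suitable for database passwords, signing salts, and encryption keys.
func newSecret(n int) (string, error) {
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}

func main() {
	s, err := newSecret(32)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(s)) // 64 hex characters for 32 random bytes
}
```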
All external traffic is TLS-terminated at the reverse proxy. Self-signed certificates are generated automatically for day-one encryption. Upload your own PEM certificate, or enable automated Let's Encrypt/ACME renewal with built-in support for Cloudflare, Route 53, and OVH DNS providers.
Configuration and scan results are backed up on independent schedules with configurable retention (1–365 days). All backups are AES-encrypted at rest. Replicate to AWS S3 or any S3-compatible store (MinIO, Backblaze B2, Wasabi). On-demand triggers and full restore through the management UI.
A settings page allows the administrator to configure the public-facing FQDN used in scanner provisioning URLs and OIDC callback redirects. Falls back to the hostname from the incoming request if not explicitly set, so the system works out of the box in simple deployments.
Transparent by Design
mipo is built for environments where operators need to understand, audit, and explain every component. No black boxes, no hidden state, no vendor lock-in.
The full source code — manager, ingest, dispatcher, scanner, website, shared libraries, CI/CD pipelines, Docker configurations, and test suites — is available in a single repository. Nothing is compiled from a private source or gated behind a license key.
Every action the GUI can perform is available as a documented API endpoint. The GUI is a consumer of the same public API that automation scripts and third-party integrations use. There are no hidden admin endpoints or GUI-only capabilities.
Scanners do not phone home for updates, download code at runtime, or self-modify. Every binary running on your network was explicitly placed there by your team. Updates happen on your schedule through your deployment process.
mipo does not collect usage analytics, send crash reports, phone home to a license server, or transmit any data to external services. All data stays within your infrastructure. The only outbound connections are ones you explicitly configure (S3 backups, ACME certificates, notification webhooks).
Every GUI page has a matching documentation page at the same path under /docs. Each page also includes a collapsible QuickHelp section with field descriptions, how-to steps, and gotchas — sourced from the same data file so the help text and full docs always agree.