Welcome to Robot Streets’ AI and Machine Learning Foundations—the launchpad where perception, planning, and learning come together to make robots truly intelligent. Here, we break down the core ideas that power modern autonomy: from classic algorithms and feature engineering to deep neural networks that see, listen, and decide in real time. Explore supervised, unsupervised, and reinforcement learning, discover how datasets are built and cleaned, and learn why evaluation metrics matter as much as models. We’ll demystify sensor fusion for situational awareness, dive into vision and speech pipelines, and show how mapping, localization, and trajectory planning connect brains to motion. You’ll compare cloud vs. edge inference, understand latency budgets, and peek into MLOps—versioning, deployment, and monitoring that keep models reliable in the field. We also spotlight safety, robustness, and bias mitigation so your robots behave predictably in a messy world. Whether you’re a curious beginner or tuning your tenth model, this hub turns complex AI concepts into practical, build-ready knowledge.
Q: How should I start building a model for a new task?
A: Begin with a simple baseline; add complexity only if the metrics justify it.
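The baseline-first rule can be made concrete with a majority-class baseline, the simplest model possible. The labels below are made up for illustration; any real model must beat this score to earn its complexity:

```python
# Hypothetical labels from a robot's obstacle-detection log (1 = obstacle).
labels = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]

# Majority-class baseline: always predict the most common label.
majority = max(set(labels), key=labels.count)
baseline_acc = sum(1 for y in labels if y == majority) / len(labels)
print(f"baseline accuracy: {baseline_acc:.2f}")  # -> baseline accuracy: 0.80
```

If a deep network only reaches 0.82 here, the added complexity buys almost nothing.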
Q: Should inference run on the edge or in the cloud?
A: Edge lowers latency and bandwidth; cloud eases updates and heavy compute.
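One way to make the edge-vs-cloud tradeoff concrete is a latency budget. The stage timings below are hypothetical, assuming a 10 Hz control loop with 100 ms per cycle:

```python
# Hypothetical per-stage latencies (ms) for a 10 Hz control loop.
BUDGET_MS = 100

edge = {"capture": 5, "preprocess": 8, "inference": 25, "postprocess": 4}
cloud = {"capture": 5, "preprocess": 8, "uplink": 40, "inference": 10,
         "downlink": 40, "postprocess": 4}

for name, stages in (("edge", edge), ("cloud", cloud)):
    total = sum(stages.values())
    verdict = "OK" if total <= BUDGET_MS else "over budget"
    print(f"{name}: {total} ms ({verdict})")  # edge: 42 ms (OK); cloud: 107 ms (over budget)
```

Cloud inference itself is faster here, but the round trip blows the budget; that is the usual argument for edge deployment in tight control loops.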
Q: How much training data do I need?
A: Enough to cover the task's variability; measure by performance plateaus, not a fixed count.
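The plateau idea can be sketched as a learning-curve check: grow the dataset, track a validation score, and stop collecting when gains flatten. The sizes, scores, and 0.01 threshold below are illustrative assumptions:

```python
# Hypothetical validation scores measured at increasing dataset sizes.
sizes  = [500, 1000, 2000, 4000, 8000]
scores = [0.71, 0.80, 0.86, 0.88, 0.885]

PLATEAU = 0.01  # assumed threshold: gains below this mean diminishing returns

def plateau_size(sizes, scores, threshold=PLATEAU):
    """Return the first dataset size after which the score gain drops below threshold."""
    for prev, cur, size in zip(scores, scores[1:], sizes[1:]):
        if cur - prev < threshold:
            return size
    return None  # still improving: keep collecting data

print(plateau_size(sizes, scores))  # -> 8000 for these scores
```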
Q: How do I prevent overfitting?
A: Regularize, augment, stop training early, and validate on held-out data.
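Early stopping, one of the techniques above, can be sketched as a patience counter on validation loss. The per-epoch losses and patience value below are hypothetical:

```python
# Hypothetical per-epoch validation losses.
val_losses = [0.90, 0.70, 0.55, 0.50, 0.48, 0.49, 0.50, 0.52]

def early_stop_epoch(losses, patience=2):
    """Stop after `patience` epochs without improvement; return the best epoch (0-based)."""
    best, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch

print(early_stop_epoch(val_losses))  # -> 4 (loss rises after epoch 4, so stop there)
```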
Q: How do I handle class imbalance?
A: Reweight the loss, resample, or collect more positives; evaluate with precision-recall curves.
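Loss reweighting can be sketched with inverse-frequency class weights; the labels below are a made-up imbalanced sample:

```python
from collections import Counter

# Hypothetical imbalanced labels: rare positives (e.g., "pedestrian ahead").
labels = [0] * 90 + [1] * 10

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so rare classes count more in the loss."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

weights = inverse_frequency_weights(labels)
print(weights)  # the rare positive class gets weight 5.0, the majority class ~0.56
```

Passed to a weighted loss, these values make each missed positive cost roughly nine times a missed negative, counteracting the 90/10 skew.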
Q: Why does my model do well in training but fail in the field?
A: Usually data drift or leakage: recheck your splits, monitor after deployment, and refine labels.
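A minimal drift monitor might compare a live feature window against the training distribution. The sensor values and 3-sigma threshold below are illustrative assumptions, not a production test:

```python
import statistics

# Hypothetical sensor feature: training distribution vs. a live window from the field.
train_vals = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98]
live_vals  = [1.4, 1.5, 1.45, 1.6, 1.5, 1.55]

def mean_shift_alarm(train, live, n_sigmas=3.0):
    """Flag drift when the live mean moves more than n_sigmas of the training std."""
    mu, sigma = statistics.mean(train), statistics.stdev(train)
    return abs(statistics.mean(live) - mu) > n_sigmas * sigma

print(mean_shift_alarm(train_vals, live_vals))  # -> True: trigger a data-quality review
```

Real deployments would track many features and use distribution tests, but even a mean-shift check catches gross drift before accuracy silently decays.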
Q: How do I speed up inference on embedded hardware?
A: Quantize, prune, batch, and fuse ops; use hardware acceleration.
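Quantization, the first technique above, can be sketched as symmetric int8 mapping; the float weights below are invented for illustration:

```python
# Hypothetical float32 weights from a small layer.
weights = [-0.8, -0.2, 0.0, 0.3, 0.7, 1.2]

def quantize_int8(values):
    """Symmetric int8 quantization: map [-max_abs, max_abs] onto [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes."""
    return [x * scale for x in q]

q, scale = quantize_int8(weights)
print(q)                                          # -> [-85, -21, 0, 32, 74, 127]
print([round(v, 2) for v in dequantize(q, scale)])  # round-trips close to the originals
```

Four times smaller than float32 and a good fit for integer-only accelerators, at the cost of small rounding error per weight.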
Q: How do I make a model's decisions explainable and safe?
A: Use saliency maps or SHAP, log every decision, and design for human overrides.
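Decision logging with a human-override path can be sketched as follows; the confidence threshold and field names are assumptions for illustration:

```python
import json
import time

def log_decision(logger, inputs, prediction, confidence, threshold=0.6):
    """Log every decision; below-threshold confidence defers to a human operator."""
    record = {
        "ts": time.time(),
        "inputs": inputs,
        "prediction": prediction,
        "confidence": confidence,
        "action": "execute" if confidence >= threshold else "defer_to_human",
    }
    logger.append(json.dumps(record))  # append-only audit trail
    return record["action"]

log = []
print(log_decision(log, {"obstacle_dist_m": 0.4}, "stop", 0.92))     # -> execute
print(log_decision(log, {"obstacle_dist_m": 2.1}, "proceed", 0.41))  # -> defer_to_human
```

The audit trail supports post-hoc analysis with tools like SHAP, and the threshold gives operators a built-in override point for low-confidence calls.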
Q: When should a deployed model be retrained?
A: On drift alarms or on a schedule (e.g., monthly or quarterly), with shadow tests before promotion.
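The retrain-and-shadow-test policy might look like this sketch; the 90-day cadence, dates, and promotion margin are illustrative assumptions:

```python
import datetime

def should_retrain(drift_alarm, last_trained, today, max_age_days=90):
    """Retrain on a drift alarm, or when the model exceeds the schedule's max age."""
    age = (today - last_trained).days
    return drift_alarm or age >= max_age_days

def promote_candidate(prod_score, candidate_score, min_gain=0.01):
    """Shadow test: promote only if the candidate clearly beats production."""
    return candidate_score >= prod_score + min_gain

today = datetime.date(2024, 6, 1)
print(should_retrain(False, datetime.date(2024, 1, 15), today))  # -> True (model is stale)
print(promote_candidate(0.91, 0.912))  # -> False (gain is within noise, keep production)
```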
Q: Which evaluation metric should I choose?
A: Match it to the risk: F1/AUPRC for rare events, ROC-AUC for balanced classes.
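Why accuracy misleads on rare events can be shown with a toy confusion count; the labels below are made up (5 rare faults in 100 samples):

```python
# Hypothetical rare-event classification: 1 = fault, mostly 0s.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 95 + [1, 1, 0, 0, 0]  # model catches only 2 of 5 faults

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2f} f1={f1:.2f}")  # -> accuracy=0.97 f1=0.57
```

A 97% accurate model that misses three of five faults is a poor safety story; F1 (and the PR curve behind it) exposes that, which is why the metric must match the risk.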
