
Job Title


Junior Computer Vision & Robotics Engineer


Company: SkyMul


Location: Kochi, Kerala


Created: 2026-05-02


Job Type: Full Time


Job Description

About SkyMul

We build real‑world systems that survive dust, heat, rain, and deadlines. We don’t chase demos; we ship machines that last. Today we enable remote QC: sensor fusion to precise 3D so changes can be reviewed from anywhere. Next we go hands‑on at a distance: telepresence robotics with ROS2, bulletproof power, and live telemetry. If you want the full pipeline (hardware → firmware → perception → decision → actuation), this is the playground, and it’s production.

Explore what we’ve built:
- Robotics solutions: /robotics#solutions
- Demo video: YouTube

Team culture & how we work

- Builders first and in person: passionate, curious, and relentlessly hands‑on. You prototype, break, measure, and rebuild at the bench and in the field.
- Not cloning tech for a local market: we build for the world and take on problems not solved elsewhere.
- Failures are data, no rulebook: undefined hard problems, learnings shared openly, failures turned into progress.
- Learn at lightning speed: self‑teach new tools, read papers, ship working systems in days, not months.
- No pedigree gating: degrees and years don’t decide; evidence of hard builds, clear thinking, and character does.
- Benevolent teammates only: we push hard and help harder. Zero tolerance for ego or toxicity.

What you’ll do

- Build CV/3D pipelines end‑to‑end: calibration, feature extraction, multi‑view geometry, reconstruction, reprojection checks.
- Translate math into code: implement geometric algorithms from first principles (SVD, least squares, RANSAC, triangulation, PnP, bundle adjustment seeds) without an LLM doing the thinking for you.
- Wire perception into the robot: pull camera/IMU/LiDAR streams together, manage TF2 frames and time sync, debug transform trees.
- Prototype hard, measure harder: instrument experiments, log everything, write short technical notes the rest of the team can read.
- Read papers and ship: turn ideas from FPCV / CS231A / 16‑822 into working scripts within days.

Must‑have

- Excellent linear algebra intuition: not memorized formulas. You can explain SVD geometrically, derive least squares, manipulate rotations in SO(3)/SE(3), and recognize when a problem is rank‑deficient.
- Hands‑on grasp of the three CV courses (lectures + assignments worked through):
  - First Principles of Computer Vision (Columbia, Shree Nayar): fpcv.cs.columbia.edu and YouTube.
  - Stanford CS231A: course notes + public problem sets.
  - CMU 16‑822 Geometry‑based Methods in Vision: geometric3d.github.io.
- Heavy coding muscle, low AI dependence: you can implement a fundamental matrix estimator, calibrate a camera, or write a small SLAM loop without copy‑pasting from an LLM. AI tools are welcome to accelerate, not to replace, understanding. We will probe this in the interview.
- Speed, then rigor: prototype quickly to learn, then harden to field grade. Own the path from scrappy v0 to reliable v1+.
- Linux, Git, and Python fluency; enough C++ comfort to read and modify ROS2 nodes.
- Methodical debugging: structured experiments, ablations, scopes, logs, reproducible results.

Nice‑to‑have (robotics is a strong bonus)

- Robotics work, any prior build: robot controls, manipulators, drones, autonomous systems, ROVs, even a serious hobby project.
- ROS2 fluency: nodes, launch, TF2, bag handling, diagnostics.
- SLAM/VO, depth fusion, NeRF/3DGS, or on‑edge inference (Jetson/NPU).
- Numerical optimization (Ceres, g2o, GTSAM) and sensor calibration tooling.
- Multi‑sensor calibration and time‑sync experience.

What success looks like

- Reproducible perception components with clear math, clear interfaces, clear tests.
- Clean ROS2 integration of your CV outputs: no surprise frames, no silent NaNs, defensible metrics.
- Short technical write‑ups that explain decisions and trade‑offs in plain language.

Recommended prep (use this before and during onboarding)

Computer Vision (do all three; FPCV first for video lectures):
- First Principles of Computer Vision (Columbia / Shree Nayar): primary lectures. Free videos + free monograph PDFs. Watch the 3D Reconstruction I & II courses end‑to‑end. → fpcv.cs.columbia.edu · YouTube
- Stanford CS231A: best free written problem sets. Use the public course notes and the ps1/ps2/ps3 PDFs as your homework. → course notes · course site
- CMU 16‑822 Geometry‑based Methods in Vision: best free coding assignments. Work through the multi‑view reconstruction problem sets. → geometric3d.github.io

Linear Algebra (level: upper undergrad; SVD is non‑negotiable):
- 3Blue1Brown, Essence of Linear Algebra: visual intuition pass; do this first if rusty. → YouTube series
- MIT 18.06 (Gilbert Strang): canonical depth, with full lectures, exams, and assignments. → MIT OCW
- ROB 101 Computational Linear Algebra (Michigan Robotics): coding‑first, robotics‑flavored, Jupyter notebooks. → GitHub

Required comfort: vector spaces, rank/null space, SVD, eigendecomposition, orthogonal projections, least squares, rotation matrices, SO(3)/SE(3), numerical conditioning. You should be able to derive and implement these, not just recognize them.

Location & work mode

Kochi, India; in person.

Compensation & growth

- Competitive pay, high ownership, and rapid growth across the full stack.
- Equity/stock options at an early‑stage startup, with performance‑based grants and refreshers.

How to apply

Send your resume plus links (portfolio/GitHub/videos/photos/papers) and 5–10 lines on your toughest build: problem, constraints, key decisions, outcome. Links preferred. If you’ve worked through any of the three CV courses or the linear algebra resources above, share your code/notes; that goes a long way.

A note on the title

We evaluate builds and character, not degrees or years. "Junior" doesn’t mean anything here. The word in the title is a deliberate filter. If "Junior" stings or you’re here to optimize for the next title bump, this isn’t your seat. If you’re here to ship hard things and let the work speak, we don’t care what we call the role.
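To make the "derive and implement, not just recognize" bar concrete, here is one minimal sketch (illustrative only, not SkyMul code; the function name and `rcond` parameter are my own) of a minimum‑norm least‑squares solver built directly on the SVD, including the rank‑deficiency check the posting calls out:

```python
import numpy as np

def lstsq_svd(A, b, rcond=1e-12):
    """Minimum-norm least-squares solution of A x ~= b via the SVD.

    With A = U diag(S) Vt, the pseudoinverse is Vt.T diag(1/S) U.T.
    Singular values near zero signal rank deficiency and are dropped,
    which is what keeps the solution finite and minimum-norm.
    """
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    keep = S > rcond * S.max()            # directions with usable signal
    S_inv = np.where(keep, 1.0 / S, 0.0)  # invert only the kept directions
    return Vt.T @ (S_inv * (U.T @ b))     # x = V Sigma^+ U^T b
```

Zeroing the small singular values is exactly the "recognize when a problem is rank‑deficient" step: a duplicated column in A collapses one singular value to ~0, and the solver still returns a sensible answer instead of blowing up.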
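In the same spirit, the "implement a fundamental matrix estimator" must‑have could look like the standard normalized 8‑point algorithm (covered in CS231A and 16‑822). This is a hedged sketch under my own naming, not a SkyMul API:

```python
import numpy as np

def normalize_points(pts):
    """Similarity-normalize Nx2 points: zero centroid, mean distance sqrt(2)."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

def eight_point(x1, x2):
    """Estimate F from >=8 correspondences (Nx2 pixel coords in each view)."""
    p1, T1 = normalize_points(x1)
    p2, T2 = normalize_points(x2)
    u1, v1 = p1[:, 0], p1[:, 1]
    u2, v2 = p2[:, 0], p2[:, 1]
    # Each correspondence gives one row of A f = 0 from x2^T F x1 = 0,
    # with f the row-major flattening of F.
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, np.ones_like(u1)])
    # Homogeneous least squares: right singular vector of the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing F's smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization and fix the overall scale (F is homogeneous).
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

The normalization step matters: without it, the pixel‑scale imbalance between the quadratic and constant columns of A makes the estimate numerically fragile. This is the kind of reasoning the interview probes.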