Forked team project developed during a hackathon. I contributed to UI/UX design, layout structuring, and minor enhancements while collaborating with the core development team.

devashishgorai/jet_engine

This repository is forked from Mayank8159/jet_engine as part of a team hackathon project.

# AeroGuard: Jet Engine Predictive Maintenance

AeroGuard is an AI-powered intelligence terminal designed for real-time monitoring and Remaining Useful Life (RUL) prediction of jet engines. By leveraging a Deep Learning LSTM (Long Short-Term Memory) model, the system processes multi-dimensional sensor telemetry to predict engine failure before it occurs.

## 🛠 Project Architecture & Workflow

The system is split into two primary layers: the Inference Engine (Python) and the Intelligence Terminal (Next.js).

### 1. Data Processing

- **Input:** 24 sensor channels (temperatures, pressures, fan speeds) over a 30-cycle time window.
- **Normalization:** Data is passed through a `StandardScaler` to match the training distribution of the CMAPSS dataset.
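This normalization step can be sketched as a per-channel z-score, which is what a fitted `StandardScaler` applies at inference time. The statistics below are synthetic for illustration; the real backend loads the fitted scaler from `scaler.pkl`:

```python
import numpy as np

WINDOW, FEATURES = 30, 24  # 30-cycle time window, 24 sensor channels


def normalize_window(window: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Apply the z-score transform a fitted StandardScaler would:
    (x - mean) / std, computed per sensor channel."""
    if window.shape != (WINDOW, FEATURES):
        raise ValueError(f"expected {(WINDOW, FEATURES)}, got {window.shape}")
    return (window - mean) / std


# Synthetic stats; in the real pipeline these come from the CMAPSS training set.
rng = np.random.default_rng(0)
raw = rng.normal(loc=500.0, scale=25.0, size=(WINDOW, FEATURES))
scaled = normalize_window(raw, raw.mean(axis=0), raw.std(axis=0))
```

After scaling, each channel has roughly zero mean and unit variance, matching what the LSTM saw during training.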

### 2. Backend (FastAPI + TensorFlow)

- **Inference:** The LSTM model analyzes temporal patterns in sensor degradation.
- **Decision Logic:** The predicted RUL is converted into a Health Index (%) and a Status Grade (A/B/C).
- **REST API:** Exposes a `/predict` endpoint that accepts 30×24 matrices.
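The RUL-to-grade decision logic can be sketched roughly as below. The RUL cap and grade cut-offs here are hypothetical assumptions; the actual values live in the backend's `main.py`:

```python
# Hypothetical RUL cap, a common choice for CMAPSS-style targets (assumption).
MAX_RUL = 125  # cycles


def to_health_index(rul: float) -> float:
    """Map a predicted RUL (cycles remaining) to a 0-100 Health Index."""
    clamped = min(max(rul, 0.0), MAX_RUL)
    return round(clamped / MAX_RUL * 100.0, 1)


def to_grade(health: float) -> str:
    """Map a Health Index to a Status Grade (illustrative cut-offs)."""
    if health >= 70.0:
        return "A"  # Healthy
    if health >= 40.0:
        return "B"  # Warning
    return "C"      # Critical
```

For example, an engine predicted at 10 cycles of RUL maps to a low Health Index and a "C" (Critical) grade, which the frontend surfaces for triage.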

### 3. Frontend (Next.js 15 + Tailwind CSS)

- **Fleet View:** A macro-level dashboard for managing multiple assets and triaging critical units.
- **Single Engine View:** A micro-level deep dive allowing manual telemetry input and sensor impact analysis.

## 🚀 Getting Started

### Backend Setup (Python)

1. Navigate to the `/backend` directory.
2. Install the dependencies:

   ```bash
   pip install fastapi uvicorn tensorflow joblib numpy
   ```

3. Start the server:

   ```bash
   uvicorn main:app --reload --port 8000
   ```

### Frontend Setup (Next.js)

1. Navigate to the `/frontend` directory.
2. Install the dependencies:

   ```bash
   npm install
   ```

3. Run the development server:

   ```bash
   npm run dev
   ```

4. Access the dashboard at `http://localhost:3000`.

## 📊 System Workflow

| Step | Action | Description |
|------|--------|-------------|
| 1 | Telemetry Ingest | 30 time steps of 24 sensor values are collected. |
| 2 | JSON POST | Frontend sends the data to `localhost:8000/predict`. |
| 3 | LSTM Inference | Model predicts RUL (cycles remaining). |
| 4 | Risk Mapping | Backend calculates the status (Healthy/Warning/Critical). |
| 5 | Visualization | Dashboard renders degradation trends and impact scores. |
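The JSON POST in step 2 can be sketched as follows. The top-level payload key `data` is an assumption, so check the `/predict` request schema in `main.py` before relying on it:

```python
import json


def build_predict_payload(window: list[list[float]]) -> str:
    """Serialize a 30x24 telemetry window for POSTing to /predict.
    The key name "data" is an assumption about the backend's schema."""
    if len(window) != 30 or any(len(row) != 24 for row in window):
        raise ValueError("payload must be a 30x24 matrix")
    return json.dumps({"data": window})


payload = build_predict_payload([[0.0] * 24 for _ in range(30)])
# To send it with the standard library:
#   req = urllib.request.Request(
#       "http://localhost:8000/predict", data=payload.encode(),
#       headers={"Content-Type": "application/json"}, method="POST")
#   urllib.request.urlopen(req)
```

Validating the matrix shape client-side keeps malformed requests from ever reaching the model.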

๐Ÿ“ Key Components

- `main.py`: The FastAPI server handling model loading and inference logic.
- `SingleEngine.tsx`: Interactive component for deep-dive analysis.
- `FleetDashboard.tsx`: Grid-based overview for operational triage.
- `parseCSV.ts`: Strict data-cleaning utility that enforces 30×24 shape validation.
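The strict checks in `parseCSV.ts` can be mirrored in Python roughly like this (a sketch of the same validation idea, not the actual TypeScript implementation):

```python
import csv
import io

ROWS, COLS = 30, 24  # required telemetry window shape


def parse_telemetry_csv(text: str) -> list[list[float]]:
    """Parse CSV telemetry and enforce the 30x24 shape:
    every cell must be numeric, with exactly 30 rows of 24 columns."""
    rows = [row for row in csv.reader(io.StringIO(text)) if row]
    matrix = [[float(cell) for cell in row] for row in rows]
    if len(matrix) != ROWS or any(len(row) != COLS for row in matrix):
        raise ValueError(f"expected a {ROWS}x{COLS} matrix, got {len(matrix)} rows")
    return matrix
```

Rejecting malformed files at the parsing layer means the `/predict` endpoint only ever sees well-shaped input.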

โš ๏ธ Requirements

- Python 3.9+
- Node.js 18+
- Model assets: ensure `lstm_rul_model.h5` and `scaler.pkl` are in the `backend/models/` folder.
