
Hi, I'm Andrei


About

Bridging the gap between mathematical theory and rigorous systems engineering.

I'm Andrei, a 20-year-old AI Architect, MLOps Engineer, and Software Developer. This website is my interactive business card, and I hope you enjoy your stay!

I combine deep mathematical foundations with rigorous systems engineering to build intelligent architectures that solve real-world problems. My expertise covers the full lifecycle, from designing custom neural networks to orchestrating distributed cloud infrastructure. I focus on creating resilient ecosystems where machine learning models integrate seamlessly with robust, scalable software.

I am driven by a hunger for difficult problems and a constant desire to evolve. Whether designing a system or leading a technical initiative, I bring creativity and purpose to my work. I am ready to join teams that move fast and build something meaningful.

Skills & Technologies

Browse the lists below to explore all my skills!
PyTorch
TensorFlow
Hugging Face
LangChain
RAG Systems
Scikit-learn
OpenCV
spaCy
JAX
ONNX
Optuna
NumPy
MLflow
Kubeflow
Ray
DVC
Feature Stores
Model Serving
A/B Testing
Vector Databases
Keras
Weights & Biases
Prompt Engineering
Fine-tuning
FAISS
Pinecone
AWS
GCP
Azure
Docker
Kubernetes
Terraform
CI/CD
Linux
Grafana
Prometheus
PostgreSQL
MySQL
MongoDB
Redis
Milvus
Weaviate
Qdrant
Elasticsearch
Python
Java
JavaScript
TypeScript
C/C++
SQL
Flask
FastAPI
Spring Boot
Node.js
Vue.js
Git
REST APIs
gRPC
System Design
Microservices
Design Patterns
Data Structures
Testing
Agile/Scrum
WebSocket
Go
Rust
Scala
PHP
Haskell
Apache Spark
Apache Kafka
Apache Airflow
Pandas
dbt
Databricks
ETL Pipelines
Snowflake
BigQuery
Stream Processing
Data Warehousing
Apache Flink
Delta Lake
Data Governance
Data Catalog
Great Expectations
Apache Hive
AWS Glue
Data Lakehouse
NumPy
Matplotlib
Seaborn
Jupyter
Google Colab
PyAgrum
Anaconda
// most experienced areas
Python: 95%
Docker: 94%
SQL: 92%
AWS/Cloud: 90%
Linux: 89%
Git/CI/CD: 88%

Projects

A selection of featured work: browse the directory tree and select a project to inspect its details.

workspace
~/src/projects
llm-engineering/workflow-assistant.md

Workflow Assistant RAG

Built a system to solve the pain of manually writing complex JSON configurations for workflow automation tools. The friction of looking up schema definitions and debugging missing commas made me realize we needed a natural language interface: just say "send an email when task duration exceeds 2 hours" and get valid, ready-to-use JSON.
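For illustration only, a request like the one above might map to output shaped roughly like this (the field names here are invented for the example, not the actual tool's schema):

```json
{
  "trigger": {
    "type": "task_duration_exceeded",
    "threshold_hours": 2
  },
  "action": {
    "type": "send_email",
    "to": "{{task.assignee.email}}",
    "subject": "Task running long: {{task.name}}"
  }
}
```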

The architecture intentionally avoids heavy vector databases in favor of a lightweight TF-IDF retriever built with scikit-learn. For code generation, I found that exact keyword matching on field names and enum values often yields better context than semantic similarity. The retriever identifies the three most relevant reference configs, which are injected around the user's prompt as grounding context. Reliability is enforced by passing the LLM's output through a strict `jsonschema` validator: if the generated JSON fails validation (missing required fields or wrong types), the system catches it immediately rather than shipping a broken config to production.
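A minimal sketch of this retrieve-generate-validate loop, under stated assumptions: `call_llm` is a hypothetical stand-in for whatever model client the system uses, and the prompt wording and retry count are illustrative, not the project's actual code.

```python
import json

from jsonschema import ValidationError, validate
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve_top_k(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank reference configs by TF-IDF cosine similarity to the query."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(corpus)  # fit vocabulary on reference configs
    query_vec = vectorizer.transform([query])      # project the query into the same space
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    return [corpus[i] for i in scores.argsort()[::-1][:k]]


def generate_config(prompt: str, schema: dict, corpus: list[str],
                    call_llm, retries: int = 2) -> dict:
    """Generate JSON grounded in retrieved examples; refuse anything schema-invalid."""
    context = "\n\n".join(retrieve_top_k(prompt, corpus))
    for _ in range(retries + 1):
        raw = call_llm(f"Reference configs:\n{context}\n\nRequest: {prompt}")
        try:
            config = json.loads(raw)
            validate(instance=config, schema=schema)  # strict structural check
            return config                             # only valid JSON gets through
        except (json.JSONDecodeError, ValidationError):
            continue  # regenerate instead of shipping a broken config
    raise RuntimeError("LLM failed to produce schema-valid JSON")
```

The design choice this captures: the retriever is a few lines of scikit-learn with no index to maintain, and the `jsonschema` gate means a weaker model with a hard validation loop still never emits a malformed config.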

This project taught me that for specialized code tasks, a rigid validation loop is far more important than a smarter model. Grounding the LLM in valid examples and constraining it with a schema turned a localized problem into a reliable utility.


Education

A glimpse into my journey and the milestones that shaped who I am today.

Experience

~/career/experience
$ experience --list
press ⏎ to run
⏎ run command • mouse supported

Browse my work history through this interactive terminal. Tap directly on any position to view details about each role.

Select a position to see my responsibilities, achievements, and the impact I made. When you're done exploring a role, press ⏎ again to return to the list, or select "exit" to reset the terminal and start fresh.


Contact

Let's build something great together.

I enjoy working on interesting projects and teaming up with people who care about what they build. Whether it's diving into a complex technical challenge, prototyping a new product idea, or improving an existing system, I like being part of teams where everyone brings something to the table. If you're looking for someone who communicates well and enjoys solving problems together, I'd be happy to chat.

You can download a trimmed version of my CV (without sensitive personal details). For the full version, feel free to contact me directly.

© 2025 Andrei Fedyna. All rights reserved.