This repository is a beginner-friendly TensorFlow 2 introduction that demonstrates how a neural network can learn a simple linear relationship from data.
The notebook uses the equation:
y = 2.5x + 3
to show the difference between traditional programming and machine learning, and then builds a tiny TensorFlow model to learn that relationship automatically.
- Project Overview
- Files in This Repository
- What the Notebook Covers
- Visual Insight
- Code Example
- Why This Project Matters
- How to Run
- Future Improvements
- Author

## Project Overview
The goal of this project is to explain the core idea of machine learning with a very simple example.
Instead of hardcoding the equation directly, the notebook:
- prepares example input-output pairs
- creates a neural network with one dense layer
- compiles the model
- trains it over multiple epochs
- visualizes the training loss
- uses the trained model to make predictions
This makes the repo a nice introduction to:
- TensorFlow 2
- Keras Sequential models
- supervised learning
- loss minimization
- prediction with trained models
## Files in This Repository

- `MLWithTensorflow2.ipynb`: main notebook
- `images/tensorflow_learning_target.svg`: visual created from the notebook’s learning target
- `README.md`: project documentation

## What the Notebook Covers
The notebook is structured as a short learning walkthrough:
It starts by showing how the equation can be solved directly with standard code.
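For instance, the rule-based version is just a one-line function. This is a minimal sketch of that idea (the function name `compute_y` is illustrative, not from the notebook):

```python
# Traditional programming: the rule y = 2.5x + 3 is written by hand.
def compute_y(x):
    return 2.5 * x + 3

print(compute_y(2))  # 8.0
```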
It then reframes the same problem as a learning task:
- give the model example values of `x`
- provide matching outputs `y`
- let the model learn the mapping
The notebook creates a simple dense layer:
- one input
- one output
This is the smallest useful neural network for learning a straight-line relationship.
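Under the hood, a one-unit dense layer computes nothing more than `w * x + b`, so it can represent any straight line. A minimal NumPy sketch of that forward pass, with the target weights filled in by hand for illustration:

```python
import numpy as np

# A one-unit dense layer learns two parameters: a weight w and a bias b.
# With w = 2.5 and b = 3.0 it reproduces the target line exactly.
w, b = 2.5, 3.0
x = np.array([-10.0, -5.0, 0.0, 2.0, 100.0])
y_hat = w * x + b  # matches the training targets below
```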
The model is compiled with:
- loss: `mean_squared_error`
- optimizer: `Adam(0.1)`
and then trained for 100 epochs.
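For intuition, mean squared error is easy to compute by hand. A quick sketch, where the starting guess `w = 1, b = 0` is an arbitrary untrained value (not from the notebook):

```python
import numpy as np

x = np.array([-10, -5, 0, 2, 100], dtype=float)
y_true = np.array([-22, -9.5, 3, 8, 253], dtype=float)

# An untrained guess: w = 1, b = 0 (arbitrary, for illustration).
y_pred = 1.0 * x + 0.0

# MSE = average of squared differences between targets and predictions.
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # 4723.65
```

Training drives this number down by adjusting `w` and `b` step by step.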
Finally, the notebook:
- plots training loss
- tests predictions on new values
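A loss plot like the notebook's can be sketched with matplotlib. The loss values below are purely illustrative; in the notebook the real curve comes from the history returned by `model.fit(...)`:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Illustrative values; the real curve is model.fit(...).history["loss"].
loss_history = [500.0, 120.0, 30.0, 8.0, 2.0, 0.5]

plt.plot(loss_history)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.title("Training loss")
plt.savefig("training_loss.png")
```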
## Visual Insight

This visual shows the exact input-output relationship used in the notebook. It makes the project immediately understandable on GitHub before opening the notebook.
The key idea is simple:
- blue points represent sample training data
- the orange line represents the pattern the model is trying to learn
## Code Example

This is the central modeling block from the notebook:

```python
import tensorflow as tf
import numpy as np

# Training data: samples of y = 2.5x + 3
x = np.array([-10, -5, 0, 2, 100], dtype=float)
y = np.array([-22, -9.5, 3, 8, 253], dtype=float)

# A single dense layer with one input and one output
layer = tf.keras.layers.Dense(units=1, input_shape=[1])
model = tf.keras.Sequential([layer])

model.compile(
    loss='mean_squared_error',
    optimizer=tf.keras.optimizers.Adam(0.1)
)

train = model.fit(x, y, epochs=100, verbose=False)

# predict expects an array-like batch, so wrap the value in an array
prediction = model.predict(np.array([50.0]))
```

This example shows the full beginner workflow:
- define training data
- create a layer
- build a model
- compile it
- fit it
- predict on a new value
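The "fit" step can be demystified with plain NumPy. The sketch below uses vanilla gradient descent rather than the Adam optimizer from the notebook, but the idea is the same: repeatedly nudge `w` and `b` in the direction that reduces the mean squared error, until they approach the true values 2.5 and 3:

```python
import numpy as np

x = np.array([-10, -5, 0, 2, 100], dtype=float)
y = np.array([-22, -9.5, 3, 8, 253], dtype=float)

# Plain gradient descent on MSE (the notebook uses Adam instead).
w, b = 0.0, 0.0
lr = 1e-4  # small learning rate because x contains a large value (100)
for _ in range(100_000):
    error = (w * x + b) - y
    # Gradients of mean((w*x + b - y)^2) with respect to w and b
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(round(w, 3), round(b, 3))  # close to 2.5 and 3.0
```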
## Why This Project Matters

This project is useful because it explains TensorFlow through a very approachable example rather than a large, complex dataset.
It demonstrates:
- the difference between explicit programming and machine learning
- how TensorFlow models are built with Keras
- how a model learns from examples
- how loss and optimization fit into training
- how predictions are made after fitting
It works especially well as a portfolio project because it shows that you can teach and implement machine learning concepts clearly.
## How to Run

- Clone the repository.
- Open the project folder.
- Install the required libraries: `pip install tensorflow numpy matplotlib jupyter`
- Launch Jupyter Notebook: `jupyter notebook`
- Open `MLWithTensorflow2.ipynb` and run the cells in order.
You can also open it directly in Colab: Open in Colab
## Future Improvements

This project could be improved further by:
- exporting the notebook’s training-loss plot as a repo image
- adding a second example with a non-linear problem
- introducing train/test split concepts
- adding comments for each TensorFlow step in the notebook
- including a short “TensorFlow basics” section for first-time learners
## Author

Divya Thakur
- GitHub: DivyaThakur24
- LinkedIn: divya-thakurr
- Portfolio: divyathakur24.github.io/DivyaThakurPortfolio