
💡 Energy Insights Dashboard – Visualizing the Spanish Electric Grid

  • Writer: Alvaro Mejia
  • May 13, 2025
  • 3 min read

Updated: Dec 22, 2025

How we developed a functional web app in Streamlit to visualize and interact with the Spanish Electric Grid data.


Role: Data Scientist & Developer

Team Size: 3 developers

Duration: 1 month

Stack: Python · Streamlit · Pandas · SQL · Plotly · REE API · TensorFlow/Keras · Scikit-learn


▶️ App overview

Check out the Streamlit app interface here.




🎯 Project Objective

The goal was to build a fully functional, interactive web dashboard in Streamlit that lets users monitor and analyze real-time and historical data from the Spanish Electric Grid via the official API provided by Red Eléctrica de España (REE): https://www.ree.es/en/datos/apidata.
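As a sketch of what a call to this API can look like, here is a minimal fetch using only the Python standard library. The `apidatos.ree.es` host, the `demanda/evolucion` path, and the query parameter names follow the public API documentation, but treat them as assumptions and verify them against the link above.

```python
import json
from urllib.request import urlopen

# Assumed base URL of the REE "apidatos" service; confirm against the docs.
BASE_URL = "https://apidatos.ree.es/en/datos"

def build_url(category: str, widget: str, start: str, end: str,
              trunc: str = "day") -> str:
    """Build a request URL for one REE data widget."""
    return (f"{BASE_URL}/{category}/{widget}"
            f"?start_date={start}&end_date={end}&time_trunc={trunc}")

def fetch_demand(start: str, end: str) -> dict:
    """Download national demand data for the given ISO-datetime range."""
    url = build_url("demanda", "evolucion", start, end)
    with urlopen(url, timeout=30) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Network call: only run interactively.
    data = fetch_demand("2025-05-01T00:00", "2025-05-07T23:59")
    print(list(data.keys()))
```

In the real ETL scripts, the same pattern would be repeated for the generation, exchange, and balance endpoints before cleaning and loading into the database.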

The app offers deep insights into:

  • Electricity demand

  • Energy generation by source

  • Energy balance

  • Interregional and international energy exchanges

  • Forecasting of future demand


📌 Key Features

We followed the Scrum Agile methodology, working in sprints focused on delivering user-centered features. Some of the major implemented stories include:

  • 🔄 Real-time monitoring of the Spanish electric grid.

  • 📅 Filtering by time ranges (7, 14, 30 days) to analyze trends.

  • 📊 Comparison tools to explore variations in energy usage year over year.

  • 🧭 Balance analysis: visualizing the relationship between demand and supply.

  • 🌍 Choropleth map showing energy exchanges with other countries.

  • Generation analysis by source (renewable vs. non-renewable).

  • 📈 Time-series demand visualization with filters for historical insights.

  • 🔍 Outlier detection in demand data using Tukey’s Fence.

  • 🔮 Machine learning model for demand forecasting, including:

    • Metrics visualization (RMSE, MAE)

    • Model methodology explanation

    • Interactive explanation of predictions

  • 🧰 Advanced filtering & search capabilities in data tables.

  • 📦 Automated data updates from the REE API using ETL scripts.
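The Tukey's Fence check mentioned above can be sketched in a few lines of pandas. The demand values and the default `1.5 × IQR` multiplier here are illustrative, not the app's actual data or configuration.

```python
import pandas as pd

def tukey_outliers(series: pd.Series, k: float = 1.5) -> pd.Series:
    """Boolean mask flagging values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (series < lower) | (series > upper)

# Toy hourly demand values in MW, with one obvious anomaly.
demand = pd.Series([29400, 29500, 30100, 29800, 30500, 65000, 29900])
mask = tukey_outliers(demand)
print(demand[mask])  # flags only the anomalous 65000 MW reading
```

The same mask can then drive the histograms and annual-trend anomaly views in the dashboard.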


🧱 Technical Implementation

  • API Integration: We developed robust scripts to extract data from REE endpoints: demand, generation, exchange, and balance.

  • Data cleaning & modeling: Unified data format, added timestamps and unique IDs to support updates.

  • Database: MySQL for structured storage; populated via scripts.

  • Streamlit Frontend:

    • Modular pages (Landing, Dashboard, Machine Learning, About)

    • Integrated with .env and st.secrets for secure credentials.

  • Visualization Tools: Used Plotly and Streamlit’s native charting for interactive graphs and maps.

  • Feature engineering: Ensured data consistency before handing the data over to the ML models:

    • Circular encoding for the days of the week and the months, so the models can better capture the cyclical nature of the data.

    • Added a binary variable indicating whether a day is a holiday or a weekend.

    • Data normalization using MinMaxScaler from Scikit-learn to scale all features into a common range for the models.

  • ML Integration: Built a demand prediction model with historical features and time-based variables using:

    • Simple RNN (Recurrent Neural Network).

    • LSTM (Long Short-Term Memory).

    • GRU (Gated Recurrent Unit).

    • Prophet.
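The feature-engineering steps above can be sketched as follows. The column names (`demand_mw`, `dow_sin`, and so on) are hypothetical, and holidays are reduced to a simple weekend flag for brevity.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Toy daily demand series standing in for the cleaned REE data.
df = pd.DataFrame({
    "date": pd.date_range("2025-01-01", periods=14, freq="D"),
    "demand_mw": np.linspace(28000, 31000, 14),
})

# Circular (sin/cos) encoding: Sunday and Monday end up close together,
# which plain integer encoding (6 vs 0) would hide from the model.
dow = df["date"].dt.dayofweek
df["dow_sin"] = np.sin(2 * np.pi * dow / 7)
df["dow_cos"] = np.cos(2 * np.pi * dow / 7)
month = df["date"].dt.month
df["month_sin"] = np.sin(2 * np.pi * (month - 1) / 12)
df["month_cos"] = np.cos(2 * np.pi * (month - 1) / 12)

# Binary flag for weekends; real holidays would be merged from a calendar.
df["is_weekend"] = (dow >= 5).astype(int)

# Min-max scaling of the target into [0, 1] before feeding the networks.
scaler = MinMaxScaler()
df["demand_scaled"] = scaler.fit_transform(df[["demand_mw"]])
```

Keeping the fitted scaler around matters, since predictions must be inverse-transformed back to MW before being shown in the dashboard.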


📊 Key Visualizations

  1. Time-series graph: National electricity demand over selectable time ranges.

  2. Choropleth map: Energy exchange per country and total national energy exchanges.

  3. Source breakdown: Pie charts of renewable vs non-renewable generation.

  4. Year-over-year comparison: Median, mean, min, and max, as well as time series.

  5. Outlier analysis: Histograms and annual trend anomalies.

  6. Forecast dashboard: Predicted demand vs actual with model performance.
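The RMSE and MAE figures reported in the forecast dashboard can be computed as below; the actual/predicted arrays are toy values for illustration, not real grid data.

```python
import numpy as np

def rmse(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Root mean squared error: penalizes large misses more heavily."""
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def mae(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean absolute error: average miss in the original units (MW)."""
    return float(np.mean(np.abs(actual - predicted)))

actual = np.array([30000.0, 29500.0, 31000.0, 30500.0])
predicted = np.array([29800.0, 29700.0, 30600.0, 30900.0])
print(rmse(actual, predicted), mae(actual, predicted))
```

Because RMSE squares each error, it separates models with occasional large misses from models with the same MAE but steadier errors, which is useful when comparing the RNN, LSTM, GRU, and Prophet variants.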


🛠 Challenges & Solutions

Several challenges appeared during the project, but all of them were solved through effective communication among the team members. Some of them were:

  • API data volume → We determined which data were relevant for our application and models through careful reasoning and teamwork.

  • Data fetching anomalies → The data were generally consistent, but the structure of the API responses changed near the end of the project, so the ETL process had to be reviewed.

  • Data encoding for cyclical patterns → The data have a clear, well-known cyclical behavior. We passed this information to the ML models by using circular encoding.

  • Streamlit display → Organized the views to make the app as user-friendly as possible.


👣 Improvements & Next steps

  • App performance. The application still experiences some latency, so its structure should be reviewed to make it faster and more efficient.

  • Database integration and ETL process. Improve the ETL process to integrate a Supabase database and deploy a public app with real-time data.

  • Improve ML models. Some of the models already perform quite well; Prophet and GRU, in particular, capture long-term patterns much better. In any case, they can always be reviewed and updated for better performance.

 
 
 
