Optimizing Query Performance for Complex Views
Written by Dina Alaa, Software Engineer on the Ninjas squad at Kites Software.

Problem Statement
A significant performance bottleneck was encountered while retrieving data from a complex view with many joins. The view combines and presents data from multiple related tables, but as data volume increased, query execution time grew dramatically, slowing the application and degrading the user experience.
Solution Approach
To resolve the performance issue, the following approach was implemented:
- Design: A table is created to store rows from the complex view. This table acts as a snapshot, capturing the data at a particular point in time.
- Population: The snapshot table is initially filled with data that is extracted from the complex view.
- Maintenance: A trigger is created to update the snapshot table whenever rows in the dependent tables are inserted, updated, or deleted. This ensures the snapshot table remains synchronized with the underlying data changes.
Technical Implementation
Creating the Snapshot Table:
-- Create the snapshot table to store results
CREATE TABLE snapshot_table AS
SELECT * FROM complex_view;
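Note that `CREATE TABLE ... AS` copies only the data, not indexes or constraints. Since the triggers below look rows up by `id`, it is worth keying the snapshot on that column; this is a minimal sketch, assuming the view exposes a unique `id` column:

```sql
-- CREATE TABLE AS copies no indexes or constraints, so add a key on the
-- id column the triggers use for lookups (assumes complex_view exposes
-- a unique id).
ALTER TABLE snapshot_table ADD PRIMARY KEY (id);
```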
Refreshing the Snapshot Table via Trigger:
To maintain synchronization between the snapshot table and the dependent tables, triggers are created for the following operations: insert, update, and delete. Each trigger invokes a corresponding function to update the snapshot table as necessary.
Trigger for Insert Operation
When a new row is inserted into a dependent table, the snapshot table is updated by fetching the relevant data from the view.
-- Create a trigger to update the snapshot table after insertion
CREATE FUNCTION refresh_snapshot_table_after_insertion()
RETURNS TRIGGER AS $$
BEGIN
-- Debugging: Log the NEW record
RAISE NOTICE 'NEW: %', NEW;
-- Insert the corresponding row into the snapshot table
INSERT INTO snapshot_table
SELECT *
FROM complex_view
WHERE NEW.id = complex_view.id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Apply the trigger to dependent tables
CREATE TRIGGER refresh_trigger_insert
AFTER INSERT ON dependent_table
FOR EACH ROW EXECUTE FUNCTION refresh_snapshot_table_after_insertion();
Trigger for Update Operation
When a row in a dependent table is updated, the snapshot table is refreshed by retrieving the latest data from the view.
-- Create a trigger to update the snapshot table after update
CREATE FUNCTION refresh_snapshot_table_after_update()
RETURNS TRIGGER AS $$
BEGIN
-- Debugging: Check the value of NEW
RAISE NOTICE 'NEW: %', NEW;
-- Update the matching row with the latest data from the view
UPDATE snapshot_table
SET
    COLUMN1 = complex_view.COLUMN1,
    COLUMN2 = complex_view.COLUMN2
    -- Add more columns as necessary
FROM complex_view
WHERE complex_view.id = NEW.id
  AND snapshot_table.id = complex_view.id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Apply the trigger to dependent tables
CREATE TRIGGER refresh_trigger_update
AFTER UPDATE ON dependent_table
FOR EACH ROW EXECUTE FUNCTION refresh_snapshot_table_after_update();
Trigger for Delete Operation
When a row in a dependent table is deleted, the corresponding row is removed from the snapshot table.
-- Create a trigger to update the snapshot table after delete
CREATE FUNCTION refresh_snapshot_table_after_delete()
RETURNS TRIGGER AS $$
BEGIN
-- Debugging: Check the value of OLD
RAISE NOTICE 'OLD: %', OLD;
-- Refresh logic here
DELETE FROM snapshot_table WHERE id = OLD.id;
RETURN OLD;
END;
$$ LANGUAGE plpgsql;
-- Apply the trigger to dependent tables
CREATE TRIGGER refresh_trigger_delete
AFTER DELETE ON dependent_table
FOR EACH ROW EXECUTE FUNCTION refresh_snapshot_table_after_delete();
Key Points
- The view is the sole source of truth for refreshing snapshot data.
- Triggers ensure the snapshot table stays synchronized by directly querying the view after insert, update, or delete operations on dependent tables.
- The function logic varies depending on which dependent table the trigger is applied to.
- This approach simplifies maintenance and ensures consistency across all updates.
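To illustrate how the function logic varies per dependent table, consider a second, hypothetical dependent table (`order_items` here, with rows that roll up into the view row identified by `order_id`); its trigger refreshes the snapshot through that foreign key rather than its own `id`. All names in this sketch are illustrative, not taken from the original system:

```sql
-- Hypothetical second dependent table: its rows affect the view row
-- identified by order_id, so the refresh keys on that column instead.
CREATE FUNCTION refresh_snapshot_after_item_change()
RETURNS TRIGGER AS $$
BEGIN
    -- Re-derive the affected snapshot row from the view
    DELETE FROM snapshot_table WHERE id = NEW.order_id;
    INSERT INTO snapshot_table
    SELECT * FROM complex_view WHERE complex_view.id = NEW.order_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER refresh_trigger_items
AFTER INSERT OR UPDATE ON order_items
FOR EACH ROW EXECUTE FUNCTION refresh_snapshot_after_item_change();
```

The delete-then-insert pattern sidesteps per-column UPDATE logic at the cost of rewriting the whole row.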
Results
The optimization led to significant improvements:
- Execution Time Before Optimization: 19 milliseconds for retrieving 2,000 records.
- Execution Time After Optimization: 0.007 milliseconds for retrieving the same 2,000 records.
- User Experience: The reduction in execution time resulted in much faster load times, greatly improving overall user satisfaction and system performance.
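Timings like those above can be reproduced with PostgreSQL's EXPLAIN ANALYZE; this is a generic sketch, not the project's exact query:

```sql
-- Compare actual execution time of the view vs. the snapshot
EXPLAIN ANALYZE SELECT * FROM complex_view;
EXPLAIN ANALYZE SELECT * FROM snapshot_table;
```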
Pros
- Improved Performance: Querying the snapshot table is significantly faster than querying a complex view, resulting in better system responsiveness.
- Scalability: The solution efficiently handles increasing data volumes compared to directly querying the view.
Cons
- Increased Complexity: Managing multiple triggers (for insert, update, and delete) can be challenging as the number of dependent tables grows.
- Risk of Inconsistency: If triggers are not implemented or maintained properly, the snapshot table may become unsynchronized with the source data.
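One way to guard against the inconsistency risk is a periodic reconciliation query; if it returns any rows, the snapshot has drifted from the view. This sketch assumes both relations expose identical columns:

```sql
-- Rows present on one side but not the other indicate drift
(SELECT * FROM complex_view EXCEPT SELECT * FROM snapshot_table)
UNION ALL
(SELECT * FROM snapshot_table EXCEPT SELECT * FROM complex_view);
```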
Conclusion
By implementing a snapshot table, a significant reduction in query execution time was achieved. This approach not only eliminated the performance bottleneck but also improved the system’s responsiveness.