How to Optimise Indexing for High-Performance Databases

Want faster database queries? Indexing is the key. Properly implemented, indexing can reduce query times from seconds to milliseconds, saving costs and improving user experience. However, it’s not just about adding indexes - you need the right ones, maintained effectively, without overloading your system.
Here’s what you’ll learn:
- How indexes work and why they matter.
- Types of indexes: single-column, composite, clustered, unique, and filtered.
- Choosing the right columns for indexing, focusing on high-cardinality and frequently queried fields.
- Avoiding common mistakes like over-indexing or neglecting maintenance.
- Tools and strategies for monitoring, maintaining, and improving index performance.
Quick takeaway: Indexing is a balancing act. It speeds up reads but can slow down writes if overused. Focus on critical queries, monitor performance, and adjust as your data grows. Whether you’re running a startup or managing enterprise systems, indexing is the foundation of a fast, efficient database.
Database Indexing Basics
To get a handle on database indexing, it helps to start with its main purpose and how it works in your database system. Indexes are the secret weapon for speeding up data retrieval, turning what could be slow, time-consuming queries into fast and efficient operations.
What Database Indexes Do
Think of database indexes as shortcuts to your data. Instead of combing through every row in a table, they provide a direct route to the information you're after. It's like having an express lane at the supermarket - it saves you time by skipping the long queues.
When you create an index on a column, the database builds a separate structure that links values to their locations in the table. This means the database engine can quickly find specific rows without wasting time on irrelevant data. For instance, if you're searching for a customer with a specific email address, an index on the email column can take you straight to that record, skipping over the rest.
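As a minimal sketch (the table and index names here are hypothetical), creating the index and running the lookup looks like this:

```sql
-- Build a lookup structure over the email column:
CREATE INDEX idx_customers_email ON customers (email);

-- With the index in place, this query can go straight to the matching row
-- instead of scanning the whole table:
SELECT customer_id, name
FROM customers
WHERE email = 'jane@example.com';
```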
But there's a catch. Indexes require extra storage space and need updating whenever the data changes. This can slow down operations like inserts, updates, and deletes.
Main Index Types
Now that you know what indexes do, let’s dive into the different types and when to use them.
Different kinds of indexes are built for different scenarios, so picking the right one depends on your data and how you query it.
- Single-column indexes: These are the simplest type, focused on individual columns that often appear in WHERE clauses or JOIN conditions. They’re great for queries that filter on a single field, like finding all orders from a specific date.
- Composite indexes: These cover multiple columns, making them ideal for queries that filter on several fields at once. The order of columns in a composite index is crucial - it works best for queries filtering on the leftmost columns but won’t be as effective for columns further down the order (see the sketch after the table below).
- Clustered indexes: These determine how data is physically stored in the table. Since data can only be sorted one way, each table can have just one clustered index. They’re especially useful for range queries and sorting.
- Unique indexes: These prevent duplicate values while also offering the performance perks of regular indexes. For example, when you create a primary key, a unique index is automatically generated to enforce data integrity.
- Filtered indexes: These are designed for specific subsets of data. They’re smaller and more efficient for queries targeting particular conditions. For example, you could create a filtered index for active customers, leaving out archived records.
| Index Type | Best Use Case | Key Benefit |
|---|---|---|
| Single-column | Simple WHERE clauses | Quick lookups on individual fields |
| Composite | Multi-column filtering | Handles complex queries efficiently |
| Clustered | Range queries, sorting | Optimises data storage and access |
| Unique | Primary keys, unique values | Ensures data integrity and speed |
| Filtered | Specific data subsets | Smaller, easier-to-maintain indexes |
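Here's a small sketch of the leftmost-column rule for composite indexes, using hypothetical table and column names:

```sql
-- A composite index on two columns:
CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);

-- Can use the index (the leading column is present):
SELECT * FROM orders WHERE customer_id = 42;
SELECT * FROM orders WHERE customer_id = 42 AND order_date >= '2024-01-01';

-- Typically cannot use it efficiently (skips the leading column):
SELECT * FROM orders WHERE order_date >= '2024-01-01';
```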
How Indexes Work with Query Optimisers
Understanding how indexes interact with query optimisers is key to making the most of them.
Query optimisers are like the brains behind efficient query execution. They analyse your SQL query and decide the best way to run it, relying heavily on indexes to make smart choices. When you run a query, the optimiser evaluates multiple execution plans and picks the one that will cost the least in terms of resources like CPU and disk I/O.
Indexes play a huge role in cutting down these costs by allowing direct access to the data. The optimiser checks available indexes to see if using them will result in fewer disk reads and faster query times.
Statistics are critical in this process. Databases track things like data distribution, cardinality (how unique the values are), and index selectivity. These stats help the optimiser estimate how many rows a query will return. The more selective an index is, the better - it narrows down the data that needs to be processed.
But not all indexes are created equal. Fragmented indexes with low page density can increase I/O costs, which might lead the optimiser to skip them in favour of other execution paths. That’s why keeping your indexes well-maintained is so important for consistent performance.
You can see how this works using tools built into your database system. For example, MySQL’s EXPLAIN command or Microsoft SQL Server’s execution plan feature shows how the optimiser uses your indexes. These tools are invaluable for fine-tuning your indexing strategy and spotting areas for improvement.
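For instance, a minimal MySQL example (hypothetical table) looks like this - the `key` column in EXPLAIN's output names the index the optimiser chose, if any:

```sql
-- Show the chosen execution plan without running the query:
EXPLAIN SELECT customer_id, name
FROM customers
WHERE email = 'jane@example.com';
```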
These basics set the stage for more advanced discussions on selecting, maintaining, and improving database indexes in later sections.
Selecting the Right Indexes
Choosing the right indexes can drastically improve query execution times by avoiding full table scans. When a database is well-indexed, it can efficiently filter and sort data. On the flip side, poor indexing can lead to slower queries and unnecessary resource usage. The goal is to strike a balance between performance improvements and the maintenance costs of adding indexes.
Finding Important Queries and Columns
Start by examining your database's query logs and performance statistics to identify which queries run most frequently and consume the most resources. These insights will help you pinpoint the areas where indexing can have the biggest impact.
Focus on the columns involved in key operations. Columns used in WHERE clauses, JOIN conditions, ORDER BY statements, and GROUP BY operations are prime candidates for indexing. These operations often play a central role in query performance and can benefit significantly from efficient sorting and filtering.
When evaluating columns for indexing, consider two important factors: cardinality and selectivity. Cardinality refers to the uniqueness of values in a column (e.g., a customer ID column usually has high cardinality). Selectivity measures how well a column narrows down result sets. Columns with high cardinality and good selectivity are ideal for indexing as they can quickly reduce the number of rows the database needs to process.
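A quick way to gauge selectivity is to compare distinct values against total rows; this sketch uses a hypothetical customers table:

```sql
-- Values near 1.0 mean almost every row is unique (highly selective);
-- values near 0 mean few distinct values (a poor index candidate).
SELECT COUNT(DISTINCT email) * 1.0 / COUNT(*) AS selectivity
FROM customers;
```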
Leverage built-in database tools to gather this information. For example, PostgreSQL offers the pg_stat_statements extension, and SQL Server provides the Query Store. Both tools can help identify which queries could benefit from improved indexing.
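As a starting point in PostgreSQL, a query like the following surfaces the most expensive statements (assuming the pg_stat_statements extension is enabled; the timing columns are named total_exec_time and mean_exec_time in PostgreSQL 13+, total_time in earlier versions):

```sql
-- Top queries by cumulative execution time:
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```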
Don’t forget about foreign key columns, as they are frequently used in JOIN operations and often require indexing to optimise performance.
Avoiding Over-Indexing
While indexes are powerful tools for speeding up queries, having too many can backfire. Each index requires additional storage, slows down write operations, and increases maintenance complexity.
Write operations, such as inserts, updates, and deletes, are particularly affected by excessive indexing. Every time data changes, the database must update all related indexes, which can become a bottleneck, especially in systems with high transaction volumes.
Too many indexes can also confuse the query optimiser. With more options to evaluate, it may take longer to decide on an execution plan, potentially negating any performance benefits.
Indexes also consume disk space based on column size and the number of rows in the table. For large datasets, redundant or unnecessary indexes can use up significant storage that could be better allocated elsewhere.
A practical starting point is to limit each table to three to five well-chosen indexes. Add more only when specific performance issues justify the extra overhead. Regularly auditing your indexes is also crucial. Many database systems provide usage statistics, making it easier to identify and remove unused indexes. This step can free up resources and keep your system running efficiently.
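In PostgreSQL, for example, a query along these lines flags indexes with no recorded scans - treat the results as candidates to investigate, not an automatic drop list:

```sql
-- Indexes never scanned since statistics were last reset:
SELECT schemaname, relname AS table_name, indexrelname AS index_name, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC;
```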
Index Selection Examples
Here are some examples of how different types of indexes can address specific performance needs:
Single-column indexes are great for simple lookups. For instance, indexing the email column in a customer table can turn a slow full table scan into a quick lookup.
Composite indexes work well when filtering involves multiple columns. Consider an e-commerce order table - creating a composite index on (customer_id, order_date, status) can speed up queries filtering by customer_id alone or by customer_id and order_date together. However, this index won’t be as effective for queries filtering only by status.
Filtered indexes (or partial indexes) are ideal for focusing on a subset of data. For example, in a support ticket system with millions of tickets, only a small fraction might be open. A filtered index on status WHERE status = 'open' allows for quick retrieval of active tickets without the overhead of indexing the entire table.
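In PostgreSQL syntax, such a partial index might look like this (table and column names are hypothetical):

```sql
-- Only open tickets are indexed, keeping the index small even when
-- the table holds millions of closed tickets:
CREATE INDEX idx_tickets_open ON tickets (created_at)
WHERE status = 'open';
```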
| Scenario | Index Type | Columns | Benefit |
|---|---|---|---|
| User login lookup | Single-column | email | Fast user authentication |
| Order history queries | Composite | (customer_id, order_date) | Quick customer order retrieval |
| Active inventory search | Filtered | product_name WHERE in_stock > 0 | Rapid lookup of available products |
| Financial reporting | Composite | (account_id, transaction_date, amount) | Efficient multi-dimensional analysis |
The secret to effective indexing lies in aligning your strategy with your database’s query patterns. Monitor performance regularly and make adjustments as needed to maintain a balance between speed and resource efficiency.
For expert guidance, consider reaching out to professionals on Talentblocks (https://talentblocks.io).
Index Maintenance and Monitoring
Keeping your database indexes in good shape is crucial as your data and query patterns shift over time. Without consistent maintenance and monitoring, even the fastest queries can slow to a crawl, frustrating users and overloading your system. Here's a breakdown of the key tasks and tools to keep your indexes performing at their best.
Regular Index Maintenance Tasks
Index maintenance isn't a one-size-fits-all solution. While index rebuilds are often thought to boost performance, this improvement is frequently due to updated statistics rather than the rebuild itself. Instead of jumping straight to a rebuild, consider updating statistics directly - it’s more resource-efficient and often delivers the same benefits.
Before implementing any maintenance tasks, measure their actual impact on your workload. Use tools like Query Store to perform A/B testing and determine if reorganising or rebuilding indexes truly improves query performance in your specific environment.
Start with statistics updates. They’re your go-to for quick, resource-friendly performance gains.
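The exact commands depend on your platform; these are the standard forms (table names are hypothetical):

```sql
-- SQL Server: refresh statistics for one table...
UPDATE STATISTICS dbo.Orders;

-- ...or for every table in the database:
EXEC sp_updatestats;

-- PostgreSQL equivalent:
ANALYZE orders;
```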
Unused indexes can be a hidden drain on resources, increasing write overhead, storage costs, and complicating query optimisation. Regular audits can help identify and remove these resource hogs. Most database systems, like SQL Server and PostgreSQL, track index usage, making it easier to spot indexes that have gone unused for months.
If you're working with cloud-based solutions like Azure SQL Database or Azure SQL Managed Instance, be cautious with index maintenance. Perform maintenance only when it's clearly needed, as unnecessary tasks can waste resources and increase costs in pay-per-use models.
Tools for Monitoring Index Performance
Monitoring tools are your best friend when it comes to understanding how your indexes are performing. Query execution plans are a great starting point. They reveal whether your queries are using indexes efficiently or resorting to costly table scans. Many modern database systems offer graphical tools to visualise execution plans, making it easier to spot inefficiencies and identify missing indexes.
For deeper insights, use database-specific tools:
- PostgreSQL: The pg_stat_user_indexes view shows how often each index is accessed.
- SQL Server: Query Store provides historical data and highlights queries that could benefit from better indexing (a usage-stats sketch follows this list).
- Oracle: AWR (Automatic Workload Repository) reports detail index usage and pinpoint performance bottlenecks.
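As one concrete example, this SQL Server sketch compares reads against writes per index; note that sys.dm_db_index_usage_stats resets when the instance restarts, so treat the numbers as indicative:

```sql
-- Indexes that are written to often but rarely read are costing you write
-- overhead without paying their way on the read side.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY reads ASC, writes DESC;
```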
Keep an eye on index fragmentation metrics, but don’t rely on them alone. High fragmentation doesn’t always equal poor performance. Pair fragmentation data with query response times to decide when maintenance is truly necessary. Some databases, especially those handling read-heavy workloads, perform just fine with moderate fragmentation.
Wait statistics and performance counters can uncover bottlenecks caused by indexing issues. For example, excessive lock waits during writes might signal over-indexing, while high disk I/O could point to missing or inefficient indexes.
While third-party monitoring tools offer user-friendly dashboards and alerts, built-in database tools often provide the most detailed and accurate performance data.
Updating Index Strategies for Changing Workloads
Indexing isn’t a "set it and forget it" task. As your workloads evolve, so should your indexing strategies. Regular reviews ensure your indexes keep up with changing data patterns and query demands.
Seasonal patterns can significantly impact query behaviour. For example, an e-commerce site might need different indexing strategies during peak shopping seasons compared to quieter periods. Promotional events often lead to a surge in specific types of queries that require tailored indexing.
Application changes also drive new indexing requirements. A social media platform might begin with simple user lookups but later shift to more complex analytics queries, necessitating entirely new indexing approaches.
Data growth is another factor. An index that worked perfectly on a small table may become a bottleneck as the table grows. Conversely, some indexes gain value as data volume increases, helping to avoid costly full table scans.
To stay ahead of these changes, schedule regular reviews. High-transaction environments might need monthly checks, while more stable systems can get by with quarterly assessments. During these reviews, analyse query patterns, identify new slow queries, and monitor changes in data distribution that could affect index performance. This proactive approach ensures your database remains efficient and responsive as your needs evolve.
Common Indexing Mistakes to Avoid
Even seasoned database administrators can stumble into indexing pitfalls that end up harming performance rather than improving it. These missteps often come from a desire to speed up queries but can lead to bottlenecks that slow down your entire system. Knowing what to avoid can save you hours of troubleshooting and keep your database in top shape.
Too Many Indexes and Poor Column Choices
Adding too many indexes or choosing the wrong columns for them can backfire, especially during write operations. Each index you create adds extra work for the database whenever you insert, update, or delete data. This can turn what should be a fast write operation into a sluggish process.
Over-indexing is a common issue in write-heavy environments. If your system handles thousands of transactions per minute, unnecessary indexes can cause a noticeable slowdown. Every write operation has to update all relevant indexes, consuming valuable resources.
Indexing low-cardinality columns, such as boolean or status fields, usually doesn’t help much. For instance, if you create an index on a "status" column with only two values like "active" or "inactive", it won’t narrow down results effectively. If 80% of your records are "active", the database still has to scan most of the table, even with the index. Instead, focus on columns with high cardinality - those with many unique values, like customer IDs or timestamps.
Redundant indexes waste resources as well. For example, if you have separate indexes on "customer_id" and "order_date", and also a composite index covering both, you’re likely maintaining more indexes than necessary. Often, the composite index can handle queries involving just the first column.
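A hedged sketch of that consolidation, with hypothetical index names:

```sql
-- The composite index serves customer_id-only queries through its leading
-- column, so the single-column index adds write cost for no read benefit.
CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);

DROP INDEX idx_orders_customer;  -- PostgreSQL syntax; SQL Server uses DROP INDEX idx_orders_customer ON orders;
```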
To avoid these problems, audit your indexes regularly. Check for indexes that haven’t been used in months, identify low-cardinality columns, and consolidate overlapping indexes. Most database systems provide tools to help you see which indexes are actually being used and which are just eating up resources.
Skipping Index Maintenance
Neglecting index maintenance can lead to performance degradation over time. Fragmented indexes are a common issue that slows down queries, but many teams only notice this when users start complaining about delays.
When you skip maintenance, index statistics become outdated. These statistics help the query optimiser decide the best way to execute your queries. Without accurate statistics, the optimiser might take inefficient paths, causing slow queries even when appropriate indexes exist. This issue grows worse as your data changes and expands.
Unused indexes can also pile up without regular reviews, taking up storage and slowing down write operations without contributing to query performance.
To keep things running smoothly, schedule regular maintenance tasks. Update index statistics, reorganise fragmented indexes, and review usage patterns periodically. Don’t rely solely on automated scripts - they often use generic settings that might not fit your system’s specific needs.
Proactive monitoring is key. Set up alerts to flag fragmented indexes or outdated statistics before they impact performance. Many database platforms include built-in monitoring tools to help you track these metrics and take action as needed.
Managing Read vs Write Performance
Once you've addressed index misuse and maintenance, the next challenge is balancing read and write performance. This is often one of the toughest aspects of database indexing. Read-heavy systems benefit from more extensive indexing, while write-heavy systems require a more restrained approach. Striking the right balance is crucial to avoid performance headaches.
Write-heavy workloads suffer when over-indexed. Every insert, update, or delete operation has to update all relevant indexes, which can significantly slow down transactions. If your application handles high volumes of real-time data, like financial transactions or sensor readings, excessive indexing can become a major bottleneck.
Read-heavy systems, on the other hand, demand fast query responses, which often require multiple indexes to support various search patterns. However, even in these systems, write operations still occur, and too many indexes can slow them down.
The solution lies in understanding your workload patterns. Analyse your query logs to identify the operations that are most frequent and critical to performance. Focus your indexing efforts on these high-priority queries instead of trying to optimise for every possible scenario.
Composite indexes are a great way to strike this balance. A well-designed composite index can support multiple query patterns while reducing the total number of indexes you need to maintain, lowering write overhead without sacrificing read performance.
For specific use cases, consider partial indexes. If most of your queries target recent records or active accounts, a partial index can optimise these common queries without the overhead of indexing the entire table.
Another strategy is time-based partitioning. By dividing large tables into partitions based on date ranges, you can apply different indexing strategies to each partition. Recent data that sees frequent updates can have fewer indexes, while older, read-only partitions can be more heavily indexed for analytical queries.
Regularly testing your indexing approach ensures it adapts to your system’s evolving needs.
Practical Indexing Implementation
Building on the strategies and challenges discussed earlier, let's dive into how to put these concepts into action. Effective indexing involves understanding the syntax of each platform, rigorously testing your changes, and leveraging the tools provided by the database. Each system has unique capabilities that can either elevate or hinder your performance improvements.
Creating Indexes on Different Database Platforms
Each database platform comes with its own set of indexing features and syntax. Understanding how to use these effectively is key to improving performance. Let's explore some examples for SQL Server, PostgreSQL, and cloud-based data warehouses.
SQL Server offers powerful indexing options that can significantly enhance query performance. While creating a non-clustered index is straightforward, the real advantage lies in mastering advanced features. For instance, with composite indexes, the order of columns is critical - place the most selective columns first for maximum efficiency.
```sql
CREATE NONCLUSTERED INDEX IX_Orders_CustomerDate
ON Orders (CustomerID, OrderDate)
INCLUDE (TotalAmount, Status);
```
The INCLUDE clause allows you to add non-key columns to the index's leaf level. This turns covering indexes into efficient tools for speeding up queries, without adding the overhead of maintaining additional key columns.
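For example, a query shaped like the one below can be answered entirely from that index - the key columns satisfy the WHERE clause and the INCLUDE columns supply the SELECT list (the customer ID is illustrative):

```sql
-- No key lookups into the base table are needed:
SELECT OrderDate, TotalAmount, Status
FROM Orders
WHERE CustomerID = 42
  AND OrderDate >= '2024-01-01';
```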
PostgreSQL offers even more flexibility with features like partial and expression-based indexes, which are ideal for optimising specific query patterns. Partial indexes work well when you frequently query subsets of data, such as active users or recent transactions.
```sql
CREATE INDEX idx_active_customers
ON customers (last_login_date)
WHERE status = 'active' AND last_login_date > '2024-01-01';
```
Expression indexes, on the other hand, shine when queries involve functions or calculations. For example, if you're often searching for customers by the first three characters of their surname, an expression index can save time by bypassing full table scans.
```sql
CREATE INDEX idx_customer_surname_prefix
ON customers (UPPER(LEFT(surname, 3)));
```
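One caveat: the optimiser will generally only consider the expression index when the query repeats the indexed expression exactly:

```sql
-- Matches the indexed expression, so the index can be used:
SELECT customer_id, surname
FROM customers
WHERE UPPER(LEFT(surname, 3)) = 'SMI';
```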
Cloud-based data warehouses like Amazon Redshift and Google BigQuery require different strategies due to their columnar storage and distributed architecture. In Redshift, you can optimise performance with DISTKEY and SORTKEY to manage how data is distributed and ordered across the cluster.
```sql
CREATE TABLE sales_data (
    transaction_id   BIGINT,
    customer_id      INT,
    transaction_date DATE,
    amount           DECIMAL(10,2)
)
DISTKEY(customer_id)
SORTKEY(transaction_date, customer_id);
```
Distribution keys control how data is spread across nodes, while sort keys determine the physical order within each node. Properly configuring these can cut query times dramatically, especially for large datasets.
Testing Index Performance
Creating indexes is just the first step - you need to ensure they actually improve performance. Tools like query execution plans and timing utilities are essential for validating your indexing decisions and spotting areas for improvement.
In PostgreSQL, the EXPLAIN ANALYZE command is invaluable for understanding how queries are executed and whether your indexes are being utilised.
```sql
EXPLAIN ANALYZE
SELECT customer_id, SUM(amount)
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_id;
```
Look for terms like "Index Scan" versus "Seq Scan" in the output. Sequential scans indicate your indexes aren’t being used, while index scans confirm successful optimisation. However, keep an eye on the actual execution times - a sequential scan might still be faster for small tables or queries that retrieve a large percentage of rows.
For SQL Server, tools like the Query Store and Database Engine Tuning Advisor can help track performance trends and suggest index improvements. That said, use these recommendations as a guide, not a rule, and always validate them against your specific workload.
Cloud platforms also offer useful monitoring tools. Amazon Redshift can flag slow queries and suggest changes, while Google BigQuery provides insights into slot usage and query execution patterns.
Lastly, don’t skip load testing with realistic data volumes. A query that performs well on a small test table might struggle with a production dataset if indexes aren’t designed properly. Establish baseline performance metrics, implement your changes, and then compare results to verify improvements. Beyond query speed, monitor storage usage to avoid introducing new bottlenecks.
Using Built-in Database Features
After testing and validating your indexing strategy, you can take it a step further with advanced database features. Modern systems offer tools like clustering, partitioning, and cloud-specific mechanisms that can enhance your indexing performance.
Table clustering in PostgreSQL is a great example. It physically rearranges table data to match an index, reducing the number of disk pages read during range queries. This is particularly effective for time-series data.
```sql
CLUSTER orders USING idx_orders_date;
```
After clustering, range queries benefit from having related records stored closer together, which reduces I/O operations. However, clustering is a one-off operation and doesn’t automatically maintain itself, so plan for periodic re-clustering if your data is frequently updated.
Partitioning is another powerful feature for handling large tables. PostgreSQL supports declarative partitioning, which allows queries to be routed automatically to the right partition. Each partition can have its own optimised indexes.
```sql
CREATE TABLE sales_2024 PARTITION OF sales
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

CREATE INDEX idx_sales_2024_customer
ON sales_2024 (customer_id, sale_date);
```
This setup keeps indexes smaller and more efficient while still supporting queries across the entire dataset when needed.
For cloud environments, specific features can make a big difference. For instance, Amazon Redshift’s automatic table optimisation adjusts sort keys based on query patterns, while Azure Synapse Analytics uses adaptive caching to speed up frequently accessed data.
Columnstore indexes in SQL Server and Azure SQL Database are particularly effective for analytical workloads. These indexes store data by column instead of by row, enabling significant compression and faster aggregation queries.
```sql
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesAnalytics
ON SalesAnalytics;
```
For hybrid workloads, you can combine non-clustered columnstore indexes with traditional row-based indexes on the same table, balancing transactional and analytical needs.
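In SQL Server syntax, a hybrid setup might look like this (the column names are hypothetical):

```sql
-- A non-clustered columnstore index for analytical scans, layered on top of
-- an OLTP table that keeps its row-based indexes for transactional queries:
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SalesAnalytics
ON SalesAnalytics (CustomerID, SaleDate, Amount, Region);
```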
Some cloud platforms even offer automated index management. For example, Azure SQL Database can create or drop indexes based on query patterns. While convenient, these automated changes should be monitored to ensure they align with your performance goals.
Key Points for Indexing Success
Getting database indexing right means combining technical know-how with smart choices, regular upkeep, and avoiding common mistakes.
Strategic index selection is the cornerstone of a fast and efficient database. Focus on indexing queries that have the most impact by analysing query patterns regularly. This approach builds on the basic indexing principles already covered.
Regular maintenance is what elevates a good indexing plan to a great one. As data changes, database statistics can become outdated, leading to poor query optimisation. Schedule regular maintenance to update these statistics, rebuild fragmented indexes, and remove unused ones that take up storage without offering any benefit.
Avoiding common pitfalls is about finding the right balance between read and write operations. Too many indexes can slow down INSERT, UPDATE, and DELETE actions, while too few can cause queries to scan entire tables unnecessarily. Monitor query performance and maintenance overhead to maintain this balance effectively.
It's also crucial to understand the features your database platform offers. For example, PostgreSQL supports partial indexes, SQL Server provides columnstore indexes, and Redshift uses distribution keys. Leveraging these platform-specific tools can significantly boost indexing performance.
Testing and validation are essential to ensure your indexing choices actually improve performance. Use execution plans to confirm that indexes are being used as expected, and run load tests with realistic data to see how your database performs under production-like conditions.
If your team lacks the expertise to fine-tune indexing, consider bringing in professionals. Talentblocks connects businesses with seasoned data engineers and solution architects who specialise in database performance. These experts can implement indexing strategies tailored to your workload, saving you the effort of building and maintaining in-house expertise.
FAQs
How do I choose the best columns to index for improving database performance?
To improve query performance, pay close attention to columns with high selectivity - those that have a wide range of unique values. These columns make indexes much more efficient. Focus on indexing columns that are often used in WHERE clauses, JOIN conditions, or in ORDER BY and GROUP BY operations, as these are the areas where indexes can make the biggest difference.
On the other hand, avoid creating indexes on columns with low cardinality, like those containing boolean values or a small set of distinct categories. These typically offer little to no performance improvement. It's also a good idea to keep track of query performance and regularly review slow-running queries to identify which columns are frequently involved. Using database monitoring tools can be especially helpful in determining where indexing will have the most noticeable impact.
Lastly, don't forget to periodically review and maintain your indexes. As your database grows and query patterns shift, regular maintenance ensures your indexes stay relevant and effective.
What are the risks of over-indexing a database, and how can you prevent them while ensuring efficient read and write operations?
Overloading a database with too many indexes can create several problems. You might notice increased storage requirements, slower write operations (like inserts, updates, and deletes), and a spike in CPU and memory usage because of the extra work involved in managing multiple indexes. Another common issue is index bloat, which can drag down the database's overall performance and efficiency.
To sidestep these pitfalls, focus on indexing only the columns that are frequently used in search conditions or joins. Make it a habit to periodically review and fine-tune your existing indexes to ensure they match your query patterns. Avoid indexing columns that are rarely queried, as this just adds unnecessary overhead. Striking the right balance between read and write efficiency is key to keeping your database running smoothly without wasting resources.
How do cloud-based databases like Amazon Redshift and Google BigQuery manage indexing differently from traditional databases, and what are the best optimisation strategies for them?
Cloud-based databases like Amazon Redshift and Google BigQuery handle indexing quite differently from traditional databases. Instead of using standard indexing methods, they rely on columnar storage and distributed architectures to boost performance.
In Amazon Redshift, features such as sort keys and zone maps are used to streamline queries by limiting the amount of data that needs to be scanned. Meanwhile, Google BigQuery replaces conventional indexing with tools like table partitioning and clustering, which help to lower scan costs and speed up query execution. BigQuery also leverages its Dremel technology to enable rapid querying without the need for traditional indexing.
To get the best performance from these platforms, consider these approaches:
- Amazon Redshift: Select the right sort keys and group similar data together to improve query efficiency.
- Google BigQuery: Implement table partitioning to divide data by date or other key fields, and use clustering to enhance query speed.
Customising your approach to align with each platform's architecture can lead to noticeable gains in both performance and cost savings.