Splunk Index Sizing Basics


You can control index sizes by time or by disk size; this is a simple guide on how to do that.

The default maximum index size (maxTotalDataSizeMB) is 500,000 MB, roughly 500 GB (0.5 TB).

Create separate indexes per data type, for example windows, linux, etc.
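As a sketch, separate indexes are defined in indexes.conf; the index names and paths below are illustrative assumptions, not a prescribed layout:

```ini
# indexes.conf -- illustrative example; names and paths are assumptions
[windows]
homePath   = $SPLUNK_DB/windows/db
coldPath   = $SPLUNK_DB/windows/colddb
thawedPath = $SPLUNK_DB/windows/thaweddb

[linux]
homePath   = $SPLUNK_DB/linux/db
coldPath   = $SPLUNK_DB/linux/colddb
thawedPath = $SPLUNK_DB/linux/thaweddb
```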

Ingest the data and, over a 24-hour period, see how much is actually being indexed; use SPL to measure the daily volume, then tune the indexes accordingly.
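One common way to see daily ingest per index is to query the license usage log. This is a sketch, and assumes you have access to the _internal index:

```
index=_internal source=*license_usage.log type=Usage earliest=-24h
| eval MB = b/1024/1024
| stats sum(MB) AS MB_per_day by idx
| sort - MB_per_day
```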

You must have a data retention policy per index. Without one, the data just grows, you store unnecessary data, you require extra storage that is not really needed, and you cannot apply RBAC per index. Do not send all data to one index; it is not good practice.

Think about how long you want to keep Windows and Linux data for, perhaps 2 weeks; you may have other logs you want to keep for a year, but how long to keep the data must come from the business. If you don't need it, don't ingest it, or delete it when it's old; you also have the option to archive it.

Simple example; in the real world the values will be much higher.


If I want to keep windows indexed data for 2 weeks at ~70 MB/day: 14 x 70 = 980 MB (~1 GB), so maxTotalDataSizeMB ≈ 980 and frozenTimePeriodInSecs = 1209600 (14 days in seconds).

If I want to keep linux indexed data for 2 weeks at ~35 MB/day: 14 x 35 = 490 MB (~0.5 GB), so maxTotalDataSizeMB ≈ 490 and frozenTimePeriodInSecs = 1209600 (14 days in seconds).
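The arithmetic above can be sketched in a few lines; the 70 MB/day and 35 MB/day figures are just the example rates used here:

```python
# Retention sizing sketch: size cap = days * MB/day, time cap in seconds.
RETENTION_DAYS = 14

def size_cap_mb(mb_per_day, days=RETENTION_DAYS):
    """Approximate maxTotalDataSizeMB for a given daily ingest rate."""
    return days * mb_per_day

def time_cap_secs(days=RETENTION_DAYS):
    """frozenTimePeriodInSecs for a retention window given in days."""
    return days * 24 * 60 * 60

print(size_cap_mb(70))   # windows example: 980 MB
print(size_cap_mb(35))   # linux example: 490 MB
print(time_cap_secs())   # 1209600 seconds
```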

Once an index reaches maxTotalDataSizeMB, or data ages past frozenTimePeriodInSecs, the oldest data is frozen, which means deleted by default, or archived if you configure a frozen path; whichever limit is hit first wins.
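Applied to the worked example above, the two limits would look like this in indexes.conf (the values are the illustrative ones from the example; paths omitted for brevity):

```ini
[windows]
maxTotalDataSizeMB     = 980
frozenTimePeriodInSecs = 1209600

[linux]
maxTotalDataSizeMB     = 490
frozenTimePeriodInSecs = 1209600
```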

Use the following SPL to check index sizes, then adjust your indexes.

| rest /services/data/indexes
| rename title AS Index_Name
| rename currentDBSizeMB AS Index_Size_MB
| eval Index_Size_GB = round(Index_Size_MB/1024,3)
| stats values(Index_Size_MB) AS Index_Size_MB by Index_Name, Index_Size_GB
| sort - Index_Size_MB

