About Splunk Validated Architectures
Splunk Validated Architectures (SVAs) are trusted reference architectures for stable, efficient and repeatable Splunk software deployments. Many of Splunk's existing customers have experienced rapid adoption and expansion, leading to certain challenges as they attempt to scale. At the same time, new Splunk customers are increasingly looking for guidelines and vetted architectures to ensure that their initial deployment is built on a solid foundation. SVAs have been developed to help our customers with these growing needs.
Whether you are a new or existing Splunk customer, SVAs are helpful tools to assist you with building an environment that is easier to maintain and simpler to troubleshoot. SVAs are designed to assist you with achieving the best possible results while minimizing your total cost of ownership. Additionally, your entire Splunk Platform can be deployed in a repeatable manner to help you scale your deployment as your needs evolve over time. Validated Architectures are highly relevant to the concerns of decision makers and administrators. Enterprise architects, consultants, Splunk administrators and managed service providers should all be involved in the SVA selection process.
SVAs offer topology options that consider a wide array of organizational requirements, so you can easily understand and find a topology that is right for your requirements. The Splunk Validated Architectures selection process will help you match your specific requirements to the topology that best meets your organization's needs. If you are new to Splunk, we recommend implementing a Validated Architecture for your initial deployment. If you are an existing customer, we recommend that you explore the option of aligning with a Validated Architecture topology. Unless you have unique requirements that make it necessary to build a custom architecture, it is very likely that a Splunk Validated Architecture will fulfill your requirements while remaining cost effective.
It is always recommended that you involve Splunk or a trusted Splunk partner to ensure that the recommendations in this document meet your needs.
If you need assistance implementing a Splunk Validated Architecture, contact Splunk Professional Services.
What's new
Welcome to the new home of the Splunk Validated Architectures where you will find a growing catalog of modules that contain best practice architectures and approaches. Along with a new home and format, the SVA program has been relaunched with a renewed focus on making you successful in your Splunk journey.
In addition to the base Splunk Validated Architecture modules, a set of Applied SVAs is now available. These build upon the base SVAs to provide more complex examples of deployments or use cases that combine one or more SVAs. They can further assist you in building systems based on best practices while ensuring that those systems are implemented using the core Validated Architectures.
The SVA relaunch seeks to deliver all the benefits of the original SVAs and improve upon them in the following ways.
- Modular: Topics are self-contained in modules that can be used on their own or pieced together to satisfy more complex requirements and use cases.
- Community oriented: Docs feedback is now captured from the broader Splunk community for review and incorporation to help keep you up to date with the latest guidance.
- Expanded: New modules, architectures and approaches help you scale your Splunk deployment.
- Applied designs: New Applied SVAs provide a broader solution focus, encompassing one or more SVAs.
Reasons to use Splunk Validated Architectures
Implementing a validated architecture will empower you to design and deploy Splunk software more confidently. SVAs will help you solve some of the most common challenges that organizations face, including:
- Performance: Organizations want to see improvements in performance and stability.
- Complexity: Organizations sometimes run into the pitfalls of custom-built deployments, especially when they have grown too rapidly or organically. In such cases, unnecessary complexity may have been introduced into the environment. This complexity can become a serious barrier when attempting to scale.
- Efficiency: To derive the maximum benefits from the Splunk software deployment, organizations must improve the efficiency of operations and accelerate time to value.
- Cost: Organizations are seeking ways to reduce total cost of ownership (TCO), while fulfilling all of their requirements.
- Agility: Organizations will need to adapt to change as they scale and grow.
- Maintenance: Optimization of the environment is often necessary in order to reduce maintenance efforts.
- Scalability: Organizations must have the ability to scale efficiently and seamlessly.
- Verification: Stakeholders within the organization want the assurance that their Splunk software deployment is built on best practice.
What to expect from Splunk Validated Architectures
SVAs do not include deployment technologies or deployment sizing. The reasoning for this is as follows:
- Deployment technologies, such as operating systems and server hardware, are considered implementation choices in the context of SVAs. These choices differ from customer to customer, so they cannot be usefully generalized.
- Deployment sizing requires an evaluation of data ingest volume, data types, search volumes and search use cases, which tend to be very customer-specific and generally have no bearing on the fundamental deployment topology. When you are ready, please reach out to Splunk for help with properly sizing your deployment based on your expected ingest and search workload profile.
Summary of current SVA guidance
SVAs provide: | SVAs do not provide: |
---|---|
Vetted, repeatable deployment topologies and best practices built on the SVA design pillars. | Deployment technologies, such as operating systems or server hardware, which are implementation choices. |
Guidance for selecting the topology that best matches your organizational requirements. | Deployment sizing, which depends on your specific data ingest volume, data types and search workload. |
Pillars of Splunk Validated Architectures
Splunk Validated Architectures are built on the following foundational pillars. For more information on these design pillars, refer to the next section.
Availability | Performance | Scalability | Security | Manageability |
---|---|---|---|---|
The system is continuously operational and able to recover from planned and unplanned outages or disruptions. | The system can maintain an optimal level of service under varying usage patterns. | The system is designed to scale on all tiers, allowing you to handle increased workloads effectively. | The system is designed to protect data, configurations, and assets while continuing to deliver value. | The system is centrally operable and manageable across all tiers. |
These pillars are in direct support of the Platform Management & Support Service in the Splunk Center of Excellence model.
SVA pillars explained
Pillar | Description |
---|---|
Availability | The ability to be continuously operational and to recover from planned and unplanned outages or disruptions. |
Performance | The ability to use available resources effectively to maintain an optimal level of service under varying usage patterns. |
Scalability | The ability to scale on all tiers and to handle increased workloads effectively. |
Security | The ability to protect data, configurations and assets while continuing to deliver value. |
Manageability | The ability to operate and manage the system centrally across all tiers. |
Topology components
The following table shows the tiers and components of Splunk software deployments, with a description of each component and additional deployment notes. Configuration sketches illustrating how these components typically reference one another follow the table.
Tier | Component | Description | Notes |
---|---|---|---|
Management | Deployment Server (DS) | The deployment server manages forwarder configuration. | Should be deployed on a dedicated instance. It can be virtualized for easy failure recovery. |
Management | License Manager (LM) | The license manager is required by other Splunk software components to enable licensed features and track daily data ingest volume. | The license manager role has minimal capacity and availability requirements and can be colocated with other management functions. It can be virtualized for easy failure recovery. |
Management | Monitoring Console (MC) | The monitoring console provides dashboards for usage and health monitoring of your environment. It also contains a number of prepackaged platform alerts that can be customized to provide notifications for operational issues. | In clustered environments, the MC can be colocated with the cluster manager; in non-clustered deployments, it can be colocated with the license manager and deployment server functions. It can be virtualized for easy failure recovery. |
Management | Cluster Manager (CM) | The cluster manager is the required coordinator for all activity in a clustered deployment. | In clusters with a large number of index buckets (high data volume or retention), the cluster manager will likely require a dedicated server. It can be virtualized for easy failure recovery. |
Management | Search Head Cluster Deployer (SHC-D) | The search head cluster deployer is needed to bootstrap an SHC and manage the Splunk configuration deployed to the cluster. | The SHC-D is not a runtime component and has minimal system requirements, so it can be colocated with other management roles. Each SHC requires its own deployer function. It can be virtualized for easy failure recovery. |
Search | Search Head (SH) | The search head provides the UI for Splunk users and coordinates scheduled search activity. | Search heads are dedicated Splunk software instances in distributed deployments. They can be virtualized for easy failure recovery, provided they are deployed with appropriate CPU and memory resources. |
Search | Search Head Cluster (SHC) | A search head cluster is a pool of at least three clustered search heads. It provides horizontal scalability for the search head tier and transparent user failover in case of outages. | Search head clusters require dedicated servers, ideally with identical system specifications. Search head cluster members can be virtualized for easy failure recovery, provided they are deployed with appropriate CPU and memory resources. |
Indexing | Indexer | Indexers are the heart and soul of a Splunk deployment. They process and index incoming data and also serve as search peers to fulfill search requests initiated on the search tier. | Indexers must always run on dedicated servers in distributed or clustered deployments. In a single-server deployment, the indexer also provides the search UI and license manager functions. Indexers perform best on bare-metal servers or on dedicated, high-performance virtual machines, if adequate resources can be guaranteed. |
Data Collection | Forwarders and other data collection components | Any component involved in data collection. | This includes universal and heavy forwarders, network data inputs and other forms of data collection (HEC, Kafka, and so on). |
Data Collection | Splunk Connect for Syslog (SC4S) | SC4S is the current best practice approach for syslog data collection. | SC4S has a dedicated icon in SVA diagrams to reflect its fundamentally different, containerized deployment model for this data collection tier component. |
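To make the relationships between the management tier and the other tiers more concrete, the following is a minimal configuration sketch, not a definitive implementation. It assumes a recent Splunk Enterprise distributed deployment; all hostnames, ports and secret values are placeholders, and setting names vary by version (older releases use master_uri where newer ones use manager_uri).

```
# deploymentclient.conf on a forwarder managed by the Deployment Server (DS)
# (hostnames below are placeholders)
[target-broker:deploymentServer]
targetUri = ds.example.com:8089

# server.conf on an indexer (cluster peer):
# points the peer at the License Manager (LM) and the Cluster Manager (CM)
[license]
manager_uri = https://lm.example.com:8089

[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <shared-indexer-cluster-secret>

# server.conf on a search head cluster member:
# points the member at the Search Head Cluster Deployer (SHC-D)
[shclustering]
conf_deploy_fetch_url = https://shc-deployer.example.com:8089
pass4SymmKey = <shared-shc-secret>
```

In practice, these settings are usually distributed by the deployment server or a configuration management tool rather than edited by hand on each instance.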
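For the data collection tier, HTTP Event Collector (HEC) inputs are defined in inputs.conf. The stanza below is a minimal sketch assuming HEC is enabled on a heavy forwarder or on the indexing tier; the input name, token value and target index are illustrative placeholders, not recommendations.

```
# inputs.conf: enable the HTTP Event Collector (HEC) and define one token
[http]
disabled = 0
enableSSL = 1
port = 8088

# placeholder input name, token value, and index
[http://example_hec_input]
token = 00000000-0000-0000-0000-000000000000
index = main
sourcetype = _json
disabled = 0
```

SC4S is typically configured to forward the syslog data it receives to an HEC endpoint such as this one.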