How to migrate entire index data from one Splunk server to another Splunk server
Question:
I have a Splunk server with an index containing about 650k events, and I want to migrate all of that data to a new instance.
I tried a migration script with the time field -27D@d, but I can only migrate 50k events.
(-27D@d is the point from which data is first available.)
Can you please help me here?
Here’s the code:

import splunklib.client as client
import splunklib.results as results
import json
import requests

# Connect to the source instance's management port
service = client.connect(host="host1", port=8089, username="admin", password="xxxx")

rr = results.ResultsReader(service.jobs.export('search index=my_index latest=-27D@d'))

# HEC endpoint and token on the destination instance
url = 'http://host2:8088/services/collector'
authHeader = {'Authorization': 'Splunk 5fbxxxx'}

for result in rr:
    if isinstance(result, results.Message):
        continue
    elif isinstance(result, dict):
        data = dict(result)['_raw']
        send_string = json.dumps({"event": data, "source": "test"}, ensure_ascii=False).encode('utf8')
        # Send data to Splunk
        response = requests.post(url, headers=authHeader, data=send_string, verify=False)
        if response.status_code == 200:
            print("Successfully pushed the data to Splunk source")
        else:
            print("Failed to push the data to Splunk source")
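One thing worth noting: the HTTP Event Collector accepts several JSON event objects concatenated back to back in a single request body, so events can be batched instead of POSTed one at a time. A rough sketch of the batching helpers (the batch size, `source` value, and the hosts/token are illustrative, not from the original script):

```python
import json

def build_hec_payload(raw_events, source="test"):
    """Concatenate events into one HEC request body.

    Splunk's HTTP Event Collector accepts multiple JSON event
    objects placed back to back in a single POST body.
    """
    return "".join(
        json.dumps({"event": raw, "source": source}, ensure_ascii=False)
        for raw in raw_events
    ).encode("utf8")

def batches(iterable, size):
    """Yield lists of up to `size` items from `iterable`."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# Hypothetical usage against the export reader above:
# raw = (dict(r)['_raw'] for r in rr if isinstance(r, dict))
# for chunk in batches(raw, 500):
#     requests.post(url, headers=authHeader,
#                   data=build_hec_payload(chunk), verify=False)
```

Batching cuts the number of HTTP round trips from one per event to one per few hundred events, which matters at 650k events.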
Answers:
If index my_index does not exist on host2 then just copy the directory $SPLUNK_DB/my_index to host2, add my_index to indexes.conf, and restart Splunk.
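For reference, a minimal stanza for my_index might look like the following (the exact paths are an assumption; adjust them to match wherever you copied the directory):

```ini
# $SPLUNK_HOME/etc/apps/search/local/indexes.conf
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```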
I managed to do this with the Splunk Docker image. I imagine it’s the same with a regular installation.
Note: In this example, $SPLUNK_HOME is /opt/splunk.
First I backed it up:
mkdir splunk_backup
cd splunk_backup
# Back up index data
mkdir -p ./opt/splunk/var/lib/splunk
sudo docker cp $container:/opt/splunk/var/lib/splunk/defaultdb ./opt/splunk/var/lib/splunk
# Back up index configurations and dashboards
# - config is at /opt/splunk/etc/apps/search/local/indexes.conf
# - dashboards are at /opt/splunk/etc/apps/search/local/data/ui/views
mkdir -p ./opt/splunk/etc/apps/search
sudo docker cp $container:/opt/splunk/etc/apps/search/local ./opt/splunk/etc/apps/search
# Back up users and reports
mkdir -p ./opt/splunk/etc
sudo docker cp $container:/opt/splunk/etc/users ./opt/splunk/etc
Then I went to the new server, launched Splunk, and stopped it:
sudo docker run --env SPLUNK_START_ARGS="--accept-license" --env SPLUNK_PASSWORD="FILL_THIS_IN" -p 8000:8000 -p 8088:8088 -p 9997:9997 -d --restart unless-stopped splunk/splunk:latest
sudo docker ps # wait for it to say (healthy) then grab container ID
sudo docker stop $new_container
Then I restored it on the new server:
cd splunk_backup
sudo docker cp ./opt/splunk/ $new_container:/opt
Then I started the new server back up:
sudo docker start $new_container
As far as I can tell, all of my data, indices, users, reports, and dashboards were copied over successfully!