Implementing Slack and Putting it all together [SOC Automation with AI Implementation]

Goal: To fully integrate ChatGPT using n8n and piece everything together.

Workflow Setup:

First it's important to disable the Splunk alert for now because we are done with the current test.







Then on the n8n server, going back to the canvas, we can pin the output so that we don't need to keep triggering the workflow.











Before continuing, I created an API Platform account on openai.com. I clicked on "start building" to set it up, naming my organization "Caser" and the API key name "Caser-SOC-Project", and then generated an API key.












Once I had this copied, it was time to go back and start setting up the workflow. This is where I encountered my first major issue. I went on vacation and wasn't able to work on the project during that time. This led to my session timing out, and when I logged back in I was no longer able to pin the webhook trigger event. I spent some time looking for a way to revert it or find a saved instance, but neither option was available to me. Finally I realized that the event was saved under the "Executions" tab. The only problem was that I couldn't simply reinstate it. The best option I could think of was to emulate the trigger using a "Set" node: I copied the event's JSON from the saved execution and entered the same values as fields in the Set node.
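To make the Set-node workaround concrete, here is a rough Python sketch of what re-creating a lost webhook payload looks like. All field names and values below are illustrative placeholders; the real structure is whatever JSON your Splunk alert posted, copied from the saved execution under "Executions".

```python
# Illustrative only: a stand-in for the JSON body a Splunk webhook alert
# might have delivered. Your actual alert's fields will differ -- copy the
# exact body from the saved execution in n8n's "Executions" tab.
sample_alert = {
    "search_name": "Unauthorized-Login-Alert",   # hypothetical alert name
    "result": {
        "_time": "2024-05-01T12:34:56.000+00:00",
        "host": "WIN10-TARGET",                  # hypothetical host
        "user": "caser",                         # hypothetical user
        "src_ip": "10.0.0.25",                   # hypothetical source IP
    },
}

# In n8n, each key/value pair becomes one field in the Set node, so the
# downstream nodes see the same shape the webhook would have produced.
for key, value in sample_alert["result"].items():
    print(f"{key} = {value}")
```

The point is that the Set node only has to reproduce the shape of the original trigger output; anything downstream can't tell the difference.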
















ChatGPT Integration:
Now that we have our setup, we can add ChatGPT to the workflow. Selecting OpenAI brings up another list, where I selected "Message a model". A setup page then pops up where we have to "Select a Credential" and create a new credential; this is where we input the API key we got from our OpenAI account. (Using ChatGPT requires purchasing credits, but this project won't need many and the cheapest package is enough.)
















Returning to the "Message a Model" window, there are a few settings that need to be changed. Under "Model", we choose which version of ChatGPT we will be using; for this project I'm using "GPT-4.1-Mini". Then we need to set a prompt for ChatGPT to follow. The prompts can be found on my GitHub (https://github.com/Castro-Erick/AI-Prompt/blob/main/README.md). The first prompt establishes the AI as an assistant to a Tier 1 analyst and sets guidelines for it to follow when determining how to process the information; the second prompt establishes a format for the system to output the information.
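Under the hood, the node is assembling a chat request with role-tagged messages, which can be sketched in Python. The prompt text below is a placeholder, not the actual prompt from the repo, and the exact request the n8n node sends may differ in detail.

```python
# Sketch of the kind of chat request the n8n OpenAI node builds.
# The system prompt here is a placeholder; the real prompts are in the
# GitHub repo linked above.
def build_chat_request(alert_text: str) -> dict:
    """Assemble a chat-completion request body with system + user roles."""
    return {
        "model": "gpt-4.1-mini",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are an assistant to a Tier 1 SOC analyst. "
                    "Follow the triage guidelines and output format below..."
                ),
            },
            # The user message carries the raw alert for the model to parse.
            {"role": "user", "content": alert_text},
        ],
    }

request = build_chat_request("raw alert JSON goes here")
print(request["model"], len(request["messages"]))
```

Keeping the analyst instructions in the system role and the alert itself in the user role is what lets the model treat the guidelines as fixed and the alert as data.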













Next, make sure that the webhook (or in this case the Set node) is connected to the ChatGPT node. Then I added one more message and prompt for the User role. This allows the AI to parse the input and pick out the important information.














After setting all of this up, I realized I was getting an error. The first thing I tried was changing the "Resource" type to "Text" instead of "Conversation", believing there was a mismatch between the input ChatGPT was expecting and what it was receiving. That didn't seem to be the case, so I started researching whether anyone else had run into this problem. After some reading I came across someone with the same issue; apparently it was a bug in the version of n8n I was using. I then tried an older version of n8n that was known to work, but it gave me the same error. Finally, I tried a suggestion I had seen but didn't think would change much: I combined all three prompts into one instead of keeping separate prompts for the system, user, and assistant roles. Surprisingly this worked, and I was then able to upgrade n8n to the latest version and continue my project.
Adding Slack to the Workflow:
First I created a Slack account, and once I was finished with the setup, I created an "alerts" channel for testing purposes. Once the channel was set up, we go back to the n8n workflow and add the "Send a Message" node for Slack.



















Next we have to "Create new Credential" in the Slack node. To do this we click on "Open Docs" and scroll down to the "Using API Access Token" section and follow the instructions to generate a token. This also requires creating an app from scratch. I simply named mine "Caser-SOC-Project".
















When adding scopes, you can choose to add only the ones necessary for the bot to accomplish its tasks. For this project I added all the recommended scopes, but they can be restricted as needed. Once all the scopes are added, we can scroll to the top of the page and install the app in the appropriate workspace. After accepting, it should display the token and allow for use with n8n. We can now save and test it.













Before we can properly test the bot, we need to add it to the "alerts" channel in our Slack workspace. We can do this by right-clicking on our bot and selecting "View app details". Then we can "Add this app to a channel" and add it to our alerts channel. Once the bot is in the appropriate channel, we can go back and finish setting up the Slack node in n8n. To test it, we change "Send Message To" to Channel, set From List under Channel to "alerts", and set the Message Text to "test".
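What the Slack node does with that configuration is call Slack's `chat.postMessage` Web API method with the bot token. A minimal Python sketch of that call, with the token as a placeholder:

```python
import json

# Placeholder for the bot token generated when installing the app.
SLACK_TOKEN = "xoxb-placeholder"

def build_post_message(channel: str, text: str) -> dict:
    """Request body for Slack's chat.postMessage endpoint."""
    return {"channel": channel, "text": text}

payload = build_post_message("#alerts", "test")
print(json.dumps(payload))

# Sending it for real would look like (requires the `requests` package
# and a valid token, so it is left commented out here):
# requests.post(
#     "https://slack.com/api/chat.postMessage",
#     headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
#     json=payload,
# )
```

This is also why the `chat:write` scope matters: without it, the same call succeeds over HTTP but Slack returns an error body instead of posting the message.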
































Now that everything is properly set up, we can do one last test to make sure the entire workflow is operating as intended. Going back to the Slack node, we replace the "test" Message Text with the content we want output by dragging "content" from the left side panel onto the Message Text field. After running it the first time, I realized the output was not what was intended: it showed [object Object] instead of the report text. To fix this, drag the "Text" field nested under Content instead of the Content field itself. This fixes the issue and outputs the full report.
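The [object Object] symptom comes from passing a whole object where a string is expected: the report lives one level deeper, nested under the content field. A small Python illustration of the nesting (the field names here mirror how the output appeared in the node's panel, not an official schema):

```python
# Simplified stand-in for the ChatGPT node's output as seen in n8n's
# left-hand panel: the report string sits inside content -> text.
response = {"content": {"text": "Full incident report..."}}

wrong = response["content"]          # a whole object; when coerced to a
                                     # string in JS this renders as
                                     # "[object Object]"
right = response["content"]["text"]  # the actual report string

print(type(wrong).__name__)  # dict, not a string
print(right)
```

Dragging the nested "Text" field is just selecting `content.text` instead of `content`.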













This was done using an internal source IP. To simulate an external one, we can go back to the ChatGPT node and change the input so that it only pulls the time, computer name, and user name while we provide a source IP of our own. We can take this IP from AbuseIPDB.com to simulate a real-world attack. We also want to add the HTTP Request tool to the ChatGPT node, and we need an AbuseIPDB account to perform our IP enrichment. Once our account is created, we go to "My API" and click on "Create Key". After creating the key, under the section where it displays the key, we click on "Check out our manual" to see how to properly implement it. On the left we select Check Endpoint, and then copy the curl command.










We can then return to the HTTP Request tool and choose "Import cURL" to paste in the curl command we copied. The next step is to scroll down to the "Query Parameters" section and clear the IP address value field; then, using the button to the right, we select "Let the model define this parameter". Next, change the maxAgeInDays value to "1". Finally, we scroll down to the "Header Parameter" section and input our API key into the Value field.
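For reference, the request the HTTP Request tool ends up making maps onto AbuseIPDB's Check endpoint like this in Python. The API key is a placeholder, and the example IP is from a documentation range:

```python
# Placeholder for the key created under "My API" on AbuseIPDB.
API_KEY = "your-abuseipdb-key"

def build_check_request(ip_address: str, max_age_in_days: int = 1):
    """Return (url, params, headers) for AbuseIPDB's v2 /check endpoint."""
    url = "https://api.abuseipdb.com/api/v2/check"
    params = {
        "ipAddress": ip_address,            # the field n8n lets the model fill in
        "maxAgeInDays": str(max_age_in_days),
    }
    headers = {"Key": API_KEY, "Accept": "application/json"}
    return url, params, headers

url, params, headers = build_check_request("203.0.113.5")
print(url, params, headers["Accept"])

# Sending it for real (requires the `requests` package and a valid key):
# requests.get(url, params=params, headers=headers)
```

The "Let the model define this parameter" toggle is doing exactly one thing here: leaving `ipAddress` blank so ChatGPT supplies it at run time, while `maxAgeInDays` and the key header stay fixed.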


















After this step, all that's left is to update ChatGPT's prompt to use the new tool. First I renamed the tool to "AbuseIPDB-Enrichment", then updated the prompt by adding "For any IP enrichment, use the tool named AbuseIPDB-Enrichment" as the second line under "Enrich with threat intelligence". Finally it's time for one final test. We can now execute the ChatGPT node, and once it's done, our Slack node will post the result in our alerts channel.


 







For this test I had to use a Set node to compensate for the missing webhook input, so to properly set up the final workflow, we simply remove the "Set" node and add back the "Webhook" node.











And we're done!

Summary and Reflection:
This was the most difficult part of the assignment because there were so many instances where I ran into issues getting things to work. Ultimately it took a lot of time and patience, but now that this project is done, I will be working on incorporating more tools into this network, such as Active Directory, an XDR, and an IDS/IPS. This will make a good testing environment to simulate attacks and improve both my defensive and offensive techniques. I look forward to continuing to expand my understanding of Cybersecurity!



