Overview
By training a model in Google's Teachable Machine to recognize our gestures, we can wave our hands or make a face to send the corresponding reaction command to Animal Crossing's API.
Reverse Engineer Animal Crossing's API
The NSO (Nintendo Switch Online) app on the phone lets us send reaction commands to the game. With a tool called mitmproxy, we can inspect the requests our phone sends and replay the reaction command ourselves.
brew install mitmproxy
Or install it with pip: pip install mitmproxy.
Run mitmproxy
mitmproxy
Or, if you prefer the web interface, run mitmweb.
mitmweb
Install the mitmproxy certificate on your phone
With your phone connected to the same network as your computer, visit http://mitm.it/ and install the certificate. Then, in your phone's Wi-Fi settings, add a manual proxy that points to your computer's IP address.
- Checking your IP address on your Mac
- Setting the manual proxy on your phone
- Downloading the certificate from http://mitm.it
- Trusting the certificate (on iOS: About > Certificate Trust Settings > enable the certificate)
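If you'd rather script the IP lookup than click through network settings, a short Python snippet (my own addition, not from the original post) prints the LAN address to enter as the proxy host:

```python
import socket

def local_ip():
    # "Connecting" a UDP socket makes the OS pick the LAN-facing
    # interface; no packet is actually sent.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(('8.8.8.8', 80))
        return s.getsockname()[0]
    finally:
        s.close()

print(local_ip())  # enter this address as the proxy host on your phone
```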
Sending Requests through Nintendo Switch App
Now launch the NSO app on the phone and play around with the Animal Crossing app. You should see your phone's requests appear in the mitmproxy terminal. We can work out the request format for reactions by sending a few from the phone.
The request endpoint for messaging and reactions is api/sd/v1/messages. Click on it and you should see the cookies and form data of this POST request.
The POST data is as follows.
{
    "body": "Smiling",
    "type": "emoticon"
}
Tip: Press q in the mitmproxy terminal to return to the request list.
These are some of the reaction values I've collected: Hello, Greeting, HappyFlower, Negative, Apologize, Aha, QuestionMark...
Note: I don't have all the reactions in my game right now. It would be great if anyone could provide the other reaction values!
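To guard against typos before hitting the API, the values collected so far can be kept in a set. This helper is my own addition, not part of the original code:

```python
# Reaction values collected so far; the list is incomplete.
KNOWN_REACTIONS = {
    'Hello', 'Greeting', 'HappyFlower', 'Negative',
    'Apologize', 'Aha', 'QuestionMark',
}

def is_known_reaction(value):
    # Cheap client-side check before sending a request.
    return value in KNOWN_REACTIONS
```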
Accessing Nintendo Switch API
Accessing the Nintendo Switch API requires a multi-step authentication flow against Nintendo's servers.
Full tutorial:
Successful authentication will give us three values:
- _gtoken cookie
- _park_session cookie
- authentication bearer token
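One way to hold these three values is a small dict that the helper functions below read from; the key names (ac_g, ac_p, ac_b) match what post_AC_NSOJSON() expects. This is a sketch with placeholder values, and refresh_tokens() is left as a stub for re-running the authentication flow:

```python
# Placeholder values; fill these in from the authentication flow.
tokens = {
    'ac_g': '<_gtoken cookie value>',
    'ac_p': '<_park_session cookie value>',
    'ac_b': '<bearer token>',
}

def refresh_tokens():
    # Re-run the authentication flow and update tokens in place.
    # Stubbed here; see the full tutorial for the actual steps.
    pass
```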
import requests
import json

user_auth_app_head = {
    'Host': 'web.sd.lp1.acbaa.srv.nintendo.net',
    'User-Agent': 'Mozilla/5.0 (Linux; Android 7.1.2; Pixel Build/NJH47D; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/59.0.3071.125 Mobile Safari/537.36',
    'Accept': 'application/json, text/plain, */*',
    'Connection': 'keep-alive',
    'Referer': 'https://web.sd.lp1.acbaa.srv.nintendo.net/?lang=en-US&na_country=US&na_lang=en-US',
    'Authorization': 'tmp',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-us'
}
def sendReaction(reaction_value):
    data = {
        'body': reaction_value,
        'type': 'emoticon'
    }
    res = post_AC_NSOJSON(user_auth_app_head, data, 'https://web.sd.lp1.acbaa.srv.nintendo.net/api/sd/v1/messages')
    if res is not None:
        if 'status' in res:
            if res['status'] == 'success':
                return 'Reaction sent!'
        elif 'code' in res:
            if res['code'] == '4102':
                refresh_tokens()
                return sendReaction(reaction_value)
            if res['code'] == '3002':
                return 'Reaction not found'
            if res['code'] == '1001':
                return 'Animal Crossing Session Not Connected'
    return res
def post_AC_NSOJSON(header, body, url):
    # tokens holds the values from authentication: ac_b = bearer token,
    # ac_g = _gtoken cookie, ac_p = _park_session cookie
    h = header
    h['Authorization'] = 'Bearer ' + tokens['ac_b']
    pcookie = {}
    pcookie['_gtoken'] = tokens['ac_g']
    pcookie['_park_session'] = tokens['ac_p']
    r = requests.post(url, headers=h, cookies=pcookie, json=body)
    thejson = json.loads(r.text)
    return thejson
Test and see if it works :)
sendReaction('Aha')
Teachable Machine
Google's Teachable Machine is an easy-to-use online tool for training models to recognize images, sounds, and poses. If you're new to machine learning, I highly recommend watching Google's 5-minute tutorial.
First create a Pose Project.
Choose Webcam for Pose Samples. Name your first class neutral and record yourself without any gestures. Then add extra classes such as clapping or waving. You can be as creative as you want.
When you’re done, press train. When training is complete, you can test the model in the preview panel. Once you’re satisfied, press Export Model above the preview panel and download the TensorFlow model.
We can use the provided TensorFlow.js sample script for a simple user interface. Copy the sample script into an empty HTML file and serve it with http-server, a Node.js package.
npm install http-server -g
cd my-pose-model
http-server
Insert our API call inside the predict() function. The API endpoint should point to our Python server, which sends the reaction.
const confidence = 0.8; // Confidence range is 0 to 1

async function predict() {
    ...
    const prediction = await model.predict(posenetOutput);
    for (let i = 0; i < maxPredictions; i++) {
        const classPrediction = prediction[i].className + ": " + prediction[i].probability.toFixed(2);
        // Insert the API call here
        if (prediction[i].probability > confidence) {
            callReaction(prediction[i].className);
        }
        labelContainer.childNodes[i].innerHTML = classPrediction;
    }
    // finally draw the poses
    drawPose(pose);
}
let clapping = 0;
const threshold = 5; // number of consecutive detections before calling the API

async function callReaction(predictionClassName) {
    if (predictionClassName == 'Clapping') {
        clapping += 1;
        if (clapping > threshold) {
            fetch('https://myapi.com/?reaction=Clapping'); // Change to your own API endpoint
            clapping = 0; // reset the counter
        }
    }
}
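For completeness, here is a minimal sketch of what that Python endpoint could look like, using only the standard library. The sendReaction() stub stands in for the real function from the earlier section, and the CORS header lets the browser page call the server; the URL and port are assumptions, not from the original post:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def sendReaction(reaction_value):
    # Stub; replace with the real sendReaction() from earlier.
    return 'Reaction sent!'

class ReactionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse ?reaction=... from the query string.
        query = parse_qs(urlparse(self.path).query)
        reaction = query.get('reaction', [None])[0]
        if reaction:
            result = sendReaction(reaction)
            self.send_response(200)
        else:
            result = 'Missing reaction parameter'
            self.send_response(400)
        # Allow the Teachable Machine page to call us from the browser.
        self.send_header('Access-Control-Allow-Origin', '*')
        self.end_headers()
        self.wfile.write(result.encode())

# To run: HTTPServer(('localhost', 8000), ReactionHandler).serve_forever()
```

With the server running, point the fetch() URL in the script above at http://localhost:8000/?reaction=Clapping.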
Be creative and have fun!
Summary
- Reverse engineer private APIs with mitmproxy
- Send API requests with Python
- Use Google's Teachable Machine for ML prototyping