I recently took on the task of allowing a user of a React Native app I’m helping build to upload a custom profile picture. It sounded like a relatively simple task when I was estimating it in our sprint planning. However, I still allowed myself some grace since I’d never done such a thing before and put 8 hours on it. Little did I know what was to come.
See, I knew our backend was running Ruby on Rails (RoR), and I knew that Active Storage is now the thing, but I didn’t realize the issues I would run into when I threw Amazon Web Services (AWS) S3 into the mix. I had heard good things about Active Storage even though I hadn’t worked with it, and I know RoR well enough to know that the things they add are intentional and typically well thought out. I also knew from my experience with S3 that while the configuration can be somewhat complex when it comes to IAM roles and the like, once it’s running the way you want, it should be pretty easy to use. Especially for something that was going to be public.
Early on in my work on this task, the back end engineer told me that Active Storage has a pretty neat way of letting the client application send files directly to S3 and hand the Rails server just a reference string. This is preferred because instead of the data going from the client to the Rails server and then on to Amazon, it goes directly from the client to Amazon. Bypassing one step speeds everything up and also saves some load on the server. I thought to myself this was pretty cool. We at Airship had done this before in a web app with solid results, so I had that code to reference and base my work off of.
Where things start to go wrong…
This is where things start to splinter. I start to digest the code from the web app we created:
import axios from "axios";
import SparkMD5 from "spark-md5";

// Sends the file's metadata (including the checksum) to Rails, which
// responds with everything needed to upload directly to S3.
const getUploadInfo = async (file) => {
  const checksum = await createFileChecksum(file);
  return axios.post(
    `${process.env.BASE_URL}/rails/active_storage/direct_uploads`,
    {
      blob: {
        filename: file.name,
        content_type: file.type,
        byte_size: file.size,
        checksum: checksum
      }
    }
  );
};
// Reads the file in 2MB chunks and resolves with the base64-encoded
// MD5 digest that Rails expects as the checksum.
export const createFileChecksum = async (file) => {
  return new Promise((resolve, reject) => {
    const chunkSize = 2097152; // 2MB
    const chunkCount = Math.ceil(file.size / chunkSize);
    let chunkIndex = 0;
    const md5Buffer = new SparkMD5.ArrayBuffer();
    const fileReader = new FileReader();

    const readNextChunk = () => {
      if (chunkIndex < chunkCount || (chunkIndex === 0 && chunkCount === 0)) {
        const start = chunkIndex * chunkSize;
        const end = Math.min(start + chunkSize, file.size);
        const fileSlice =
          File.prototype.slice ||
          File.prototype.mozSlice ||
          File.prototype.webkitSlice;
        const bytes = fileSlice.call(file, start, end);
        fileReader.readAsArrayBuffer(bytes);
        chunkIndex++;
        return true;
      } else {
        return false;
      }
    };

    fileReader.addEventListener("load", event => {
      md5Buffer.append(event.target.result);
      if (!readNextChunk()) {
        // All chunks consumed: finalize the digest as raw binary,
        // then base64-encode it.
        const binaryDigest = md5Buffer.end(true);
        const base64digest = btoa(binaryDigest);
        resolve(base64digest);
      }
    });
    fileReader.addEventListener("error", () =>
      reject(`Error reading ${file.name}`)
    );

    readNextChunk();
  });
};
// Uploads the file to S3 using the presigned URL and headers from Rails,
// then returns the signed_id Rails uses to attach the blob.
export const uploadFile = async (file) => {
  const uploadInfo = await getUploadInfo(file);
  await axios.put(uploadInfo.data.direct_upload.url, file, {
    headers: uploadInfo.data.direct_upload.headers
  });
  return uploadInfo.data.signed_id;
};
Real quick: getUploadInfo() sends the relevant info about the file to the Rails back end and returns what’s needed to direct upload to S3. createFileChecksum() is used by getUploadInfo() to calculate the base64-encoded MD5 checksum of the file being sent; while Amazon does not require this, Rails does. Lastly, uploadFile() uploads the file to AWS and then returns the signed_id that is then sent to Rails so it can associate that file with whatever it belongs to in the back end.
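For what it’s worth, here’s roughly how helpers like these get wired up on the web, along with the shape of what Rails sends back. The file input handling and the profile endpoint are hypothetical; uploadFile() is the function above:

// The response from POST /rails/active_storage/direct_uploads includes a
// signed_id for the new blob plus a presigned S3 URL and the headers to
// send with the PUT (abridged):
//
// {
//   "signed_id": "eyJfcmFpbHMi...",
//   "direct_upload": {
//     "url": "https://my-bucket.s3.amazonaws.com/...",
//     "headers": { "Content-Type": "...", "Content-MD5": "..." }
//   }
// }

// Hypothetical web usage: upload when a file is picked, then hand the
// signed_id to Rails (this PATCH endpoint is made up).
const input = document.querySelector("input[type='file']");
input.addEventListener("change", async () => {
  const signedId = await uploadFile(input.files[0]);
  await axios.patch(`${process.env.BASE_URL}/profile`, {
    user: { avatar: signedId }
  });
});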
I later realized most of this code came from somewhere else, maybe even the @rails/activestorage package. I found similar code living in a file_checksum.js file in the Rails repository on GitHub (https://github.com/rails/rails/blob/master/activestorage/app/javascript/activestorage/file_checksum.js). No matter the source of the code, there was an issue: I don’t have access to the FileReader API on mobile. I’m working in React Native, not a browser. So the search commenced for a way to do this exact same thing in React Native.
All the things that didn’t work
Actually, I’m not going to bore you with everything that didn’t work. I honestly don’t think you care. You probably Googled how to do this and found it’s NOWHERE TO BE FOUND on the internet, even though direct upload has been a feature in Rails for a while. You might have even landed on the Rails issue “Make ActiveStorage work for API only apps” and a comment there:
For those on react native, I was able to get direct uploads working using rn-fetch-blob for md5 hashing (which is output in hex), then converting its hex output into base64 using buffer for calculating the checksum. To lookup the content_type, I used react-native-mime-types, and last but not least, used rn-fetch-blob again for calculating the size. Then, just follow the communication guidelines pointed out by @cbothner, and if the files are big, use rn-fetch-blob for efficiently uploading the file.

– Samsinite
So, I tried to follow the above thread and I couldn’t get it to work. Granted, that comment is almost 6 months old, and in JavaScript time that’s a lifetime ago. The main issue I ran into is that I could not for the life of me get the checksum to match what Amazon calculated on their side. I kept getting responses of “The Content-MD5 you specified was invalid”. I tried MANY ways of generating the MD5 checksum, and they all ended up with the same Content-MD5 message being returned from AWS.
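For context, here’s the conversion Samsinite’s comment describes, as a sketch: S3 wants the base64 encoding of the raw 16-byte MD5 digest, not of the hex string, so the hex output rn-fetch-blob produces has to be decoded first.

import { Buffer } from "buffer";

// Convert a hex MD5 digest (the format rn-fetch-blob's md5 hashing
// produces) into the base64-of-raw-bytes form S3's Content-MD5 expects.
const hexToBase64Md5 = hexDigest =>
  Buffer.from(hexDigest, "hex").toString("base64");

// MD5 of an empty file is "d41d8cd98f00b204e9800998ecf8427e" in hex;
// as a Content-MD5 value it has to be "1B2M2Y8AsgTpgAmY7PhCfg==".

Base64-encoding the hex string directly is the classic way to earn that invalid Content-MD5 response, though even knowing this I still couldn’t get my own checksums to line up.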
So here’s how I ended up getting it to work (why you’re really here):
import axios from "axios";
import Config from "react-native-config";
import RNFetchBlob from "rn-fetch-blob";
import AWS from "aws-sdk/dist/aws-sdk-react-native";
import { Platform } from "react-native";
import { Buffer } from "buffer";

const { fs } = RNFetchBlob;

// Credentials come from env vars via react-native-config
// (which means they ship with the app).
AWS.config.update({
  accessKeyId: Config.AWS_ACCESS_KEY_ID,
  region: Config.AWS_REGION,
  secretAccessKey: Config.AWS_SECRET_ACCESS_KEY
});

const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

const getUploadInfo = async (fileInfo, file) => {
  const params = {
    Bucket: Config.AWS_BUCKET,
    ContentType: fileInfo.type,
    Key: fileInfo.fileName,
    Body: file
  };
  // Let the SDK build a presigned putObject URL, then pull the
  // Content-MD5 checksum it calculated out of the query string.
  // (Still hacky: this assumes the parameter is present in the URL.)
  const psUrl = s3.getSignedUrl("putObject", params);
  const checksum = decodeURIComponent(psUrl.match(/Content-MD5=([^&]+)/)[1]);
  return axios.post(
    `${Config.API_UPLOAD_HOST}/rails/active_storage/direct_uploads`,
    {
      blob: {
        filename: fileInfo.fileName,
        content_type: fileInfo.type,
        byte_size: fileInfo.fileSize,
        checksum: checksum
      }
    }
  );
};

export const uploadFile = async (fileInfo) => {
  // iOS URIs come back with a file:// prefix that rn-fetch-blob
  // doesn't want.
  const uri =
    Platform.OS === "ios" ? fileInfo.uri.replace("file://", "") : fileInfo.uri;
  const file = await fs
    .readFile(uri, "base64")
    .then(data => Buffer.from(data, "base64"));
  const uploadInfo = await getUploadInfo(fileInfo, file);
  const { headers, url } = uploadInfo.data.direct_upload;
  await axios.put(url, file, { headers: { ...headers } });
  return uploadInfo.data.signed_id;
};
This is definitely not the most elegant solution, and I haven’t refactored it at all yet. However, it works, and in the world of code that means something. So what in the world is going on here? I’ll walk through it, although I’ll jump around the file some. First, I set up the aws-sdk and a new s3 instance (I’m using react-native-config to manage environment variables here). I initially did this to see if I could get the signed_id I needed by just bypassing Rails and uploading directly to AWS. That didn’t work. However, what I noticed when I generated a pre-signed URL for uploading via the aws-sdk was that the URL contained an MD5 checksum!
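To show where that hides, here’s the extraction step on its own. The URL below is entirely invented, but it has the shape the code above is parsing:

// An invented presigned putObject URL; at least in the setup above,
// passing a Body results in a Content-MD5 query parameter.
const psUrl =
  "https://my-bucket.s3.amazonaws.com/avatar.jpg" +
  "?AWSAccessKeyId=AKIAEXAMPLE" +
  "&Content-MD5=1B2M2Y8AsgTpgAmY7PhCfg%3D%3D" +
  "&Content-Type=image%2Fjpeg" +
  "&Expires=1569000000&Signature=EXAMPLESIG";

// Pull out the parameter by name and URL-decode it.
const checksum = decodeURIComponent(psUrl.match(/Content-MD5=([^&]+)/)[1]);
// => "1B2M2Y8AsgTpgAmY7PhCfg=="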
Back to the code
Okay, the code. Walk through it. Here we go. I call uploadFile() in the response from react-native-image-picker on my screen component; that’s where the fileInfo argument comes from. I then get the proper URI based on the OS and read the file with rn-fetch-blob. I turn that data into a Buffer because the aws-sdk only accepts certain types of files when creating a pre-signed URL. I then pass the fileInfo and the file along to getUploadInfo(). getUploadInfo() then creates a pre-signed URL using the s3 instance we set up earlier and does some hacky string matching (still needs a refactor) to pull the checksum out of it. Now, I can use that checksum (which Amazon’s code created) to get the direct upload URL and headers from Rails. Lastly, I upload the file to AWS and return the signed_id, which I send along to Rails elsewhere in my code.
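Since the screen side isn’t shown above, here’s a rough sketch of that handoff. The picker options are hypothetical, but the v1.x response object already carries the fields uploadFile() reads:

import ImagePicker from "react-native-image-picker";

// Hypothetical screen-side wiring: the picker response includes uri,
// fileName, fileSize, and type, which is exactly the fileInfo shape
// uploadFile() expects.
ImagePicker.showImagePicker({ mediaType: "photo" }, async response => {
  if (response.didCancel || response.error) return;
  const signedId = await uploadFile(response);
  // signedId then gets sent to Rails so it can attach the blob to the
  // user (that request is app-specific).
});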
Ultimately, this was a pretty frustrating problem to fight against. However, it felt so good when I uploaded a file and saw the user profile image change. I actually got up and ran around my home office with my hands in the air rejoicing. I’m also stoked that I can share this solution and see how others might improve on what I did or figure out better ways to go about this. I’m not convinced this is the best solution to this problem; however, it’s a solution that works.
From my yarn.lock:
– react-native v0.60.5
– react-native-image-picker v1.1.0
– rn-fetch-blob v0.10.16
– aws-sdk v2.532.0