Category Archives: API

How to solve Postman “Could not get any response” error

Trying to build a REST API can be really frustrating. You install Express.js with mongoose or mongodb, create a schema following the instructions, and you think everything should work. You set up Postman to test the API, hit the API, and Postman stays at “Sending request…” for a while. You wait… and wait… and then eventually you see the message “Could not get any response”.

Postman Could not get any response

Sometimes, it works fine to set everything up all at once. But, it can often happen that there are just too many moving parts. When that happens, and something breaks, it can be very difficult to figure out the point of failure.

As an example, suppose you have an Express router which handles a POST request like this using a mongoose model:

const router = require('express').Router();
// ... mongoose code here
router.post('/registerKitten', async (req, res) => {
    const fluffy = new Kitten({ name: 'fluffy' });
    try {
        await fluffy.save();
        res.json(fluffy);
    } catch(err) {
        res.status(400).send(err);
    }
})

If Postman times out when hitting the API, you can’t know whether the problem is in Express.js or Mongoose.

The first, simple thing you can do is add some console logging to see if your route is executed:

const router = require('express').Router();
// ... mongoose code here
router.post('/registerKitten', async (req, res) => {
    console.log("registerKitten");
    const fluffy = new Kitten({ name: 'fluffy' });
    try {
        console.log("awaiting save...");
        await fluffy.save();
        console.log("save is finished");
        res.json(fluffy);
    } catch(err) {
        res.status(400).send(err);
    }
})

That alone won’t tell you the whole story, but it narrows things down. If you see “awaiting save…” but never see “save is finished”, the logging has helped you locate the problem – mongoose is hanging.
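One generic way to make a hang like this fail fast (a sketch of my own – `withTimeout` is a hypothetical helper, not a mongoose API) is to race the save against a timer:

```javascript
// withTimeout is a hypothetical helper: it rejects if `promise` has not
// settled within `ms` milliseconds, turning a silent hang into an error.
function withTimeout(promise, ms) {
    let timer;
    const timeout = new Promise((resolve, reject) => {
        timer = setTimeout(() => reject(new Error("Timed out after " + ms + "ms")), ms);
    });
    return Promise.race([promise, timeout]).then(
        (value) => { clearTimeout(timer); return value; },
        (err) => { clearTimeout(timer); throw err; }
    );
}

// Inside the route handler, the save might then be written as:
// await withTimeout(fluffy.save(), 5000);
```

With that in place, a hanging save rejects after five seconds and lands in the existing catch block, so Postman gets a 400 response instead of waiting forever.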

At this point, you may be better off writing a small Node.js test script which will just exercise the code related to mongoose, independently of Express. That way, you can debug the problem without having to deal with Express code or restart your Express server.

Here’s some sample code. You’ll have to adjust this to your own specific case, but the idea is to put this code into a script (test.js) and run it from the command line with node test.js:

"use strict";
const mongoose = require('mongoose');
const Kitten = require('./models/Kitten');

mongoose.connect(process.env.MONGO_DB_URL, { useNewUrlParser: true });
const db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));

// Set up some mock user data
const fluffy = new Kitten({ name: 'fluffy' });

// When the db is open, 'save' this data.
db.once('open', async function () {
    // we're connected!
    console.log("db opened... going to save the kitten");
    await fluffy.save();
    console.log("printing fluffy");
    console.log(fluffy);
    db.close();
});

// Print that db connection has been closed.
db.once('close', function () {
    console.log('close');
});

Notice that the code references an environment variable, process.env.MONGO_DB_URL. You’ll need to set that in your terminal. In Linux, you can do that by using the export command: export MONGO_DB_URL=mongodb://127.0.0.1:27017/myappdb.
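To fail fast when that variable is missing – rather than letting the connection attempt hang on an undefined URL – you could add a small guard at the top of test.js. This `requireEnv` helper is my own sketch, not part of the original script:

```javascript
// requireEnv is a hypothetical helper: it reads a required environment
// variable and throws a clear error if the variable is unset or empty.
function requireEnv(name) {
    const value = process.env[name];
    if (!value) {
        throw new Error("Missing required environment variable: " + name);
    }
    return value;
}

// The connect line could then become:
// mongoose.connect(requireEnv('MONGO_DB_URL'), { useNewUrlParser: true });
```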

If you found this interesting, click the subscribe button below! I write a new post about once a week.

What causes “SyntaxError: JSON Parse error: Unrecognized token ‘<’” in React Native?

TL;DR: In React Native, you will get “SyntaxError: JSON Parse error: Unrecognized token ‘<’” if your URL returns a 404 error, or in general if the content is not a JSON string.

In a recent post, I showed how to display a list of movies that had been fetched from a REST API. It worked great, but I wondered what would happen to my app’s user if their device was offline, or if the REST API ever went down. To mimic this behavior, I changed the URL by adding the number 1 at the end of it, like this: “https://facebook.github.io/react-native/movies.json1”.

And here’s what I saw in the emulator:

SyntaxError: JSON Parse error: Unrecognized token ‘<’

The red screen says “SyntaxError: JSON Parse error: Unrecognized token ‘<’”. That may be confusing at first, although if you work with REST APIs for any length of time, you’ll soon come to recognize what it means. In the meantime, how do we investigate?

When I load up this test URL in a web browser, I see content which looks like this:

<!DOCTYPE html>
<html>
  <head>
  ...Page not found...
</html>

It’s a fancy 404 error page. That explains why response.json() barfs on this: the content isn’t JSON. Your app expected a JSON string, tried to parse what it received into a JavaScript object, and failed. As a reminder, here’s that fetch call:

componentDidMount() {
    return fetch('https://facebook.github.io/react-native/movies.json1')
        .then((response) => response.json())
        .then((responseJson) => {
...
        })
        .catch((error) => {
            // TODO FIXME replace the red screen with something informative.
            console.error(error);
        });
}
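One defensive pattern – a sketch of my own, where `parseJsonBody` is a hypothetical helper – is to check `response.ok` and parse the body text yourself, so a 404 HTML page produces a readable error instead of the cryptic token message:

```javascript
// parseJsonBody is a hypothetical helper: it parses text as JSON and throws
// a descriptive error when the body is not JSON (e.g. an HTML 404 page).
function parseJsonBody(text) {
    try {
        return JSON.parse(text);
    } catch (err) {
        const preview = text.slice(0, 30);
        throw new Error('Response was not JSON (starts with: "' + preview + '")');
    }
}

// The fetch chain might then look like:
// fetch(url)
//     .then((response) => {
//         if (!response.ok) throw new Error("HTTP " + response.status);
//         return response.text();
//     })
//     .then((text) => parseJsonBody(text))
```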

In the longer term, I will want to replace that red screen of death with a nice error page which instructs the user what to do. I’m still developing my application, however, and as a dev, I’d rather see the stack trace for errors like this when they occur.

So to deal with this, I’ll do two things: 1) I’ll add a “TODO FIXME” note in my code. When I’m cleaning up code in the end stages of development, I know to look for these types of comments which indicate work still needs to be done. 2) I’ll open an issue in my issue tracker which will let everyone on my team know that there’s something that still has to be handled in building the application. I’ll bring this to the attention of anyone who needs to know (a project manager, perhaps). The project manager may assign a designer to build a page with some graphics or specific text to display to the user in case of this error.


How to use the Amazon AWS SDK for Textract with PHP 7.0 Asynchronously

A few days ago, I got an interesting question about my post which describes using the Amazon AWS SDK for Textract. The question was: “How can I do this with a PDF stored in S3? I know you need to use analyzeDocumentAsync but unsure how to then get the results of the async operation.”

It turns out to be pretty easy, once you’ve got the synchronous example running. The synchronous Textract example is described in that previous blog post.

Here are the code changes you need to make. Keep all the source code as before, but starting with the call to analyzeDocument, replace that and the following lines with this code:

$promise = $client->analyzeDocumentAsync($options);
$promise->then(
    // $onFulfilled
    function ($value) {
        echo 'The promise was fulfilled.';
        processResult($value);
    },
    // $onRejected
    function ($reason) {
        echo 'The promise was rejected.';
    }
);

function processResult($result) {
    // If debugging, dump the whole result:
    // echo print_r($result, true);
    $blocks = $result['Blocks'];
    // Loop through all the blocks:
    foreach ($blocks as $key => $value) {
        if (isset($value['BlockType']) && $value['BlockType']) {
            $blockType = $value['BlockType'];
            if (isset($value['Text']) && $value['Text']) {
                $text = $value['Text'];
                if ($blockType == 'WORD') {
                    echo "Word: " . print_r($text, true) . "\n";
                } else if ($blockType == 'LINE') {
                    echo "Line: " . print_r($text, true) . "\n";
                }
            }
        }
    }
}

When you run your PHP code from the command line, you’ll notice a small wait while the asynchronous code processes, and then you’ll see the same output as before.

Here’s a link to the Guzzle Promises project to give you an idea of how to use Promises in PHP.

And here’s the full source example use of analyzeDocumentAsync.


Simple debugging tool in React Native

TL;DR: If you want to debug React Native code really quickly, console.log and console.warn can help.

In my previous post, I described how I ported the React Clock app to React Native. This is the code for my simple app:

import React, { Component } from 'react';
import { Text, View, Button } from 'react-native';
import Clock from './Clock';

export default class HelloWorldApp extends Component {
    render() {
        return (
            <View style={{ flex: 1, justifyContent: "center", alignItems: "center" }}>
                <Text style={{ fontWeight: 'bold', padding: 10 }}>Hello, world!</Text>
                {/* padding does not work with Button!! */}
                <Button style={{ fontWeight: 'bold', padding: 40 }} title="Click me" onPress={() => {}} />
                {/* Since padding does not work with Button, we add an empty text area */}
                <Text>{""}</Text>
                <Clock />
            </View>
        );
    }
}

For my next project, I decided to do something more realistic. I wanted to figure out how to fetch data from a REST API.

I already had the Android emulator started (see previous post). A quick look at the React-Native networking documentation told me that doing a fetch should be a piece of cake. Because I didn’t want to copy their entire sample app, but just reuse their fetch call, I copied the componentDidMount method, and pasted it into my application above the render method:

componentDidMount() {
    return fetch('https://facebook.github.io/react-native/movies.json')
        .then((response) => response.json())
        .then((responseJson) => {
            this.setState({
                isLoading: false,
                dataSource: responseJson.movies,
            });
        })
        .catch((error) => {
            console.error(error);
        });
}

(The componentDidMount method may be familiar to you from React.js development.)

I didn’t see any errors when I did this, but I also couldn’t tell whether the fetch method had worked! If I had been building this app using JavaScript in a web browser, I could have quickly checked the results by adding a console.log statement to print out responseJson. I tried this, in fact, but nothing noticeable happened onscreen. It took me a while to realize that my statements were being logged in the terminal window running the Metro server (where I’d run the npm start command) – I’m not usually looking at the terminal unless I’m trying to debug a problem.

A quick search also told me that I could use console.warn to display text on the emulator’s screen. I added console.warn(responseJson); just above the setState call, and I could see that the method had succeeded, and I could also see part of the responseJson content in the YellowBox which appeared. Clicking on this YellowBox warning gave me a fullscreen view of the JSON.

Probably it’s a bad idea to display debug messages using console.warn, but if I were debugging on a device without the help of Metro server, I think console.warn would come in handy.
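If you want warnings in development only, React Native sets a global `__DEV__` flag to true in dev builds. A tiny guard – my own sketch, with `debugWarn` a hypothetical name – keeps the noise out of release builds:

```javascript
// debugWarn is a hypothetical helper: it warns only when the React Native
// __DEV__ global is defined and true, and stays silent otherwise.
// It returns whether it warned, which makes it easy to test.
function debugWarn(message) {
    const isDev = typeof __DEV__ !== 'undefined' && __DEV__;
    if (isDev) {
        console.warn(message);
    }
    return isDev;
}
```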


How to access an AWS RDS using JDBC in your Android app – Part I

You’ve got a huge spreadsheet that has a lot of data in it, and you’ve built an Android app which works like a search engine on the data. Nice! But there’s a problem: when you build your app with all of that data in it, the APK is huge! You want to reduce the size of the app. And you also want to offload the search functionality onto a relational database, which is probably going to provide a more efficient search. How do you start?

This blog post explores one way to do it. It’s “quick and dirty”, and it’s not recommended to do things exactly this way. I’ll talk about why in Part II. But this method will give you a start.

Here’s a quick sketch of the idea: You put your data in the cloud using the Amazon Relational Database Service (RDS). Then you add JDBC calls to your app to access the cloud. It’s pretty quick. Here are the steps, using a simple example that I tried for myself.

Technical Details: My development environment runs Ubuntu 16.04, and I have a MySQL client and the MySQL database already installed on my local machine. I use Android Studio 3.5 IDE for building Android apps. Also, I have an Amazon AWS account set up already. You can follow this tutorial if you don’t have any of that, but then specific steps will differ for you.

Get Your Data Source Ready

For my data source, I downloaded some food inspection data from healthdata.gov in csv (“comma-separated values”) format. I opened the csv file in a spreadsheet, selected the columns that I wanted, and exported them to another file, also in csv format. To follow along, you can use the csv file that I generated: a small, truncated version of the data. Later, you can create your own, much larger data source for experiments.

Create an Amazon RDS MySQL Database

Visit the Amazon MySQL RDS page and click “Get Started”. If you don’t have an AWS account, you will need to sign up for one, first. Check out the pricing, if you are worried. There’s a free tier, great!

If you’re already signed in, another way to get started is to visit the AWS Management Console, search for “RDS”, and click the result for “Managed Relational Database Service”.

At this point, you’ll see a “Create Database” button. Choose “MySQL”, and click the “free tier”. Type in healthdata-1 for the name. Choose a username when requested. I’m using fullstackdev. Pick a secure password. The other parts of the form are straightforward. You can think about using IAM based authentication later. For this proof-of-concept piece of work, let’s keep it simple, and use password based authentication. For the rest, accept all defaults.

At this point, a page opens which says the database is being created.

AWS RDS creating database

Click the “modify” button. You’ll see that you can modify various things about the database later, if you want. Just be aware of this. For right now, you’ll need to “modify” the RDS so that it can be accessed from external sources – so choose “Public accessibility” and set it to Yes, and make sure to click the “Continue” button at the bottom of the page to save your changes. You need to do this so that you can create a database, load data into it, and access it via JDBC.

Now we’ve got an RDS in the cloud, and it’s accessible from our home environment. Next, we need to create a database.

Create Your Database and Manage Access

If you click the DB identifier in your RDS console, you will see an area called “Connectivity & security”. That area tells you what your endpoint is, and what your port is. The port defaults to 3306. Your endpoint will be something like healthdata-1.c84gpzpanfrn.us-east-1.rds.amazonaws.com. This is a URL you can use to access the database from another machine.

In the ‘Security’ pane, at the right, you will see your VPC (Virtual Private Cloud) security groups with a link to the default. Click that. It will take you to your Security Groups area. The default VPC security group should be preselected. Look at the bottom panel, where you should see the “Description”, “Inbound”, “Outbound”, and “Tags” tabs. Click “Inbound” and hit the “Edit” button. Click the “Add Rule” button, select MySQL/Aurora, make sure that the protocol is set to TCP/IP and the port to 3306, then choose “MyIP” as the source. Your IP address will be filled in automatically. Then hit the “Save” button.

Remember that you’ve added this rule just for your own IP address! You’re doing this for test purposes. Later, if you want, you can make different inbound rules, but this setup is good for a proof-of-concept.

Now the RDS is accessible. I am comfortable using the command line for MySQL client, so I used this to step into the cloud, and create my database. You can use whatever tool you want to do this.

First, I connected via this command:

mysql -u fullstackdev -P 3306 -p -h healthdata-1.c84gpzpanfrn.us-east-1.rds.amazonaws.com healthdata-1

The -p option tells the client to ask for a password interactively. I gave the password that I had set up earlier, and immediately, I was connected. This is what I saw:


show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| innodb             |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.03 sec)

It’s the usual default MySQL database setup.

I had already designed a database around the food inspection data that I had decided to import. I created my own database like this:

CREATE DATABASE food_inspections;
USE food_inspections;
DROP TABLE IF EXISTS health_reports;
CREATE TABLE health_reports (
    id INT AUTO_INCREMENT PRIMARY KEY,
    inspection_id INT,
    dba_name TEXT,
    aka_name TEXT,
    license_num INT,
    facility_type TEXT,
    risk TEXT,
    address TEXT,
    city TEXT,
    state TEXT,
    zip TEXT,
    inspection_date DATE,
    inspection_type TEXT,
    results TEXT,
    violations TEXT,
    location TEXT
);

I didn’t add any indexes for the columns other than the primary key. That can all be added later, when performance tuning.

Push Your Data to Amazon RDS MySQL Database

AWS provides instructions for pushing data to a MySQL RDS in the cloud. Since we have a new RDS which is already set up, we can skip straight to step 5, “Load the Data”.

They tell you to use the mysqlimport command, and you can do that if you want. There are other tools that can be used to import data, too. However, since I was already in the MySQL client, I used the LOAD DATA command, like so:

LOAD DATA LOCAL INFILE 'Food_Inspections_small.csv' INTO TABLE health_reports
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    (@inspection_id, @dba_name, @aka_name, @license_num, @facility_type,
     @risk, @address, @city, @state, @zip, @inspection_date,
     @inspection_type, @results, @violations, @location)
    SET inspection_id = @inspection_id, dba_name = @dba_name, aka_name = @aka_name,
        license_num = @license_num, facility_type = @facility_type, risk = @risk,
        address = @address, city = @city, state = @state, zip = @zip,
        inspection_date = @inspection_date, inspection_type = @inspection_type,
        results = @results, violations = @violations, location = @location;

Keep in mind that you may need to modify this command for your own purposes. I had launched the MySQL client from the same directory where my Food_Inspections_small.csv was located, so this command worked for me straightaway.

Now, my RDS is all set up, complete with data! That is half the battle. In my next blog post, I’ll cover how to access the RDS using an Android app.


How does S3 generate the URL with putObject method?

Recently, I noticed a question on a forum about the AWS SDK S3Client class.

The person was using the putObject method of S3Client to upload a file to an Amazon S3 bucket.

After that, he needed to figure out the URL which could be used to access that file. He had figured out that an uploaded file called cat.gif could be accessed with the URL “https://s3.eu-west-3.amazonaws.com/aws.mybucket.es/mysite/httpdocs/cat.gif”.

The problem was that when he uploaded a file whose name included special characters, such as an accented o – “ó” – he couldn’t figure out a consistent way to construct the URL. A character with an accent got URL encoded, but the parenthesis character in a file name did not!

He was trying to figure out the implementation details for the putObject method, and couldn’t find any documentation about it.

The answer to his question was that he was asking the wrong question! There’s a software principle that you should “write code to the interface, not to the implementation”.

As consumers of the S3Client API, we should not be trying to figure out the URL to an uploaded file. Rather, we should be asking the interface for the URL. If AWS revealed the details of their URL construction scheme, it would be very painful if they ever decided to change it, both for them and for users of S3. Further, programmers everywhere would be forced to implement the algorithm that AWS declared for URL construction in all the different languages that are supported by the AWS SDK. That’s a lot of duplicated effort.

Fortunately, AWS gives us an interface that can be used to obtain the URL after a file is uploaded. The result of S3Client->putObject contains an ObjectURL property. We can use that to get the URL, which we can record however we want for later use. Here’s an example:

...
$result = $s3->putObject(...);
$url = $result['ObjectURL'];
...

The full source code for this example of using the S3Client putObject method is at github.

So you see that there’s no need to figure out how AWS implements the URL for our file. AWS gives us the URL immediately when our file is uploaded.


How to use Amazon AWS Translate with PHP 7.0

Amazon AWS Translate is a pretty cool translation service. You can get started free of charge. Let’s give it a try. This demo assumes you’ve got an AWS account (if not, first go get that). I’m using PHP 7.0 on an Ubuntu 16.04 box.

First, create a new IAM (Identity and Access Management) group. Let’s call it TranslateGroup. Give it TranslateReadOnly permissions. Don’t know how to do this? Sign into your AWS console, and search for “IAM”. That will take you to the right place for dealing with IAM.

Add a new user to this group. Let’s call this user TranslateUser. Give it programmatic access only.

When you see your Access key ID and secret, copy them into your AWS credentials file (in Linux, this is located under ~/.aws/credentials). Set the header for the profile to be [TranslateUser].

Now that you’ve created a user, make sure you’ve installed the AWS PHP SDK. I did this in my demo directory, just by downloading the SDK and unzipping it. The contents of my directory are pretty simple:

~/TranslateDemo$ ls -lairt
total 164
18226436 drwxr-xr-x   3 fullstackdev fullstackdev     4096 Jul 11 15:06 Psr
18226304 drwxr-xr-x   2 fullstackdev fullstackdev     4096 Jul 11 15:06 JmesPath
18226324 drwxr-xr-x   7 fullstackdev fullstackdev     4096 Jul 11 15:06 GuzzleHttp
18226301 -rw-r--r--   1 fullstackdev fullstackdev   129259 Jul 11 15:06 aws-autoloader.php
18226446 drwxr-xr-x 197 fullstackdev fullstackdev    12288 Jul 11 15:06 Aws
 6961244 -rw-rw-r--   1 fullstackdev fullstackdev      958 Sep 16 20:32 test_translate.php
...

It’s quick and easy to code up the rest. Here’s some demo code (test_translate.php):

<?php
require './aws-autoloader.php';

use Aws\Translate\TranslateClient;
use Aws\Exception\AwsException;

$client = new TranslateClient([
    'profile' => 'TranslateUser',
    'region' => 'us-west-2',
    'version' => 'latest'
]);

// Translate from English (en) to Spanish (es).
$currentLanguage = 'en';
$targetLanguage = 'es';
$textToTranslate = "Call me Ishmael. Some years ago—never mind how long precisely—having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world.";

echo "Calling translateText function on '".$textToTranslate."'\n";

try {
    $result = $client->translateText([
        'SourceLanguageCode' => $currentLanguage,
        'TargetLanguageCode' => $targetLanguage,
        'Text' => $textToTranslate,
    ]);
    echo $result['TranslatedText']."\n";
} catch(AwsException $e) {
    // output error message if fails
    echo "Failed: ".$e->getMessage()."\n";
}

Run this from the command line: php test_translate.php. The output is:

Calling translateText function on 'Call me Ishmael. Some years ago—never mind how long precisely—having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world.'
Llámame Ishmael. Hace algunos años, no importa cuánto tiempo precisamente— teniendo poco o ningún dinero en mi bolso, y nada particular que me interesara en la costa, pensé que navegaría un poco y vería la parte acuosa del mundo.

Pretty easy, right?


How to call the reddit REST API using Node.js – Part IV

This is the last of a 4-part series that describes how to call and use the reddit REST API using Node.js.

In Part I, I talked about using curl to get your access token, which gets you permission to use the reddit REST API.

In Part II, I used that access token to call reddit’s search API. But I was still using curl to do this. I have a very large string output from the API, and I don’t know about you, but I’m not keen on using Linux command line tools to process strings. Since I like JavaScript, I decided to move to Node.js for completing my work.

In Part III, I built a Node.js script which gets my access token from reddit. That script does pretty much what I’d been doing in part I, but it does it using Node.js.

Today, I’m going to complete the work by adding a reddit search API call to my Node.js script, and then using JavaScript’s handy string processing functionality to display the information that interests me.

Recall that I’m pretending to be responsible for Starbucks public relations, and I want to find out what’s being said about Starbucks at reddit, in case I need to do damage control!!

Here’s my new Node.js method which uses reddit’s search API to look for new entries which mention “Starbucks”:

const searchReddit = function (d) {
	const options = {
		hostname: "oauth.reddit.com",
		port: 443,
		path: "/r/all/search?q=Starbucks&sort=new",
		method: "GET",
		headers: {
			"Authorization": "Bearer " + d.access_token,
			"User-Agent": "fullStackOasis NewPostsScraper"
		}
	}

	const req = https.request(options, (res) => {
		// console.log(`statusCode: ${res.statusCode}`)
		let chunks = [];
		res.on('data', (d) => {
			// d is a Buffer object.
			chunks.push(d);
		}).on('end', () => {
			let result = Buffer.concat(chunks);
			let tmpResult = result.toString();
			try {
				let parsedObj = JSON.parse(tmpResult);
				// Print the string if you want to debug or prettify.
				// console.log(tmpResult);
				processSelfText(parsedObj);
			} catch (err) {
				console.log("There was an error!");
				console.log(err.stack);
				// process.stderr.write requires a string or Buffer; passing the
				// raw Error object throws "TypeError: Invalid data, chunk must
				// be a string or buffer", so convert it first.
				process.stderr.write(String(err));
			}
		});
	})

	req.on('error', (error) => {
		process.stderr.write(String(error));
	})

	req.end();	
};

As before, you do not have to understand this code in detail to see what’s going on. The input to my searchReddit function has the access_token which I’d previously obtained in Part III. This new code uses that access token to call the reddit search API, doing a search for “Starbucks”.

Buried in that code above is a call to a function processSelfText. I need that because it’s not helpful to have a giant wall of text displayed to me! I need to process this blob of data, and have the script display only the interesting parts.

My function processSelfText grabs the blob of JSON which was returned from reddit’s search API, and loops through it for all the individual reddit threads. It prints out a substring of the thread that contains the mention of “Starbucks”, and also prints out the reddit URL in case I want to read the whole thread. I can quickly skim through the results to see if the thread looks potentially harmful to Starbucks. If it does, then I can go to reddit to respond.

Here’s the string processing code:

const processSelfText = function (obj) {
	if (obj.data && obj.data.children && obj.data.children.length) {
		obj.data.children.forEach(function (item, n) {
			// data is an Object. It may have selftext property
			if (item.data) {
				console.log("Item #" + n);
				if (!item.data.selftext) {
					console.log("Only found a url, no text:");
					console.log(item.data.url);
				} else {
					console.log("Found url and text:");
					console.log(item.data.url);
					showSurroundingText(item.data.selftext);
				}
			}
		});
	}
}

/**
 * Print a window of text surrounding the first mention of "starbucks"
 * in the given thread text.
 * @param {string} str the selftext of a reddit thread
 */
const showSurroundingText = function (str) {
	let maxchars = 150;
	// Have to do a lowercase search.
	let found = str.toLowerCase().indexOf("starbucks");
	if (found > -1) {
		// See https://davidwalsh.name/remove-multiple-new-lines
		str = str.replace(/[\r]+/g, " ");
		str = str.replace(/[\n]+/g, " ");
		// If the first argument is negative, or the second runs past the end
		// of the string, substring clamps them, so edges are safe.
		var substring = str.substring(found - maxchars, found + maxchars);
		console.log("..." + substring + "...");
	}
};
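To see the windowing logic in isolation, here’s a simplified version (newline handling omitted; the sample sentence and the `surroundingText` name are mine, not from reddit):

```javascript
// surroundingText returns up to `maxchars` characters on each side of the
// first case-insensitive match of `term` in `str`, or null if no match.
function surroundingText(str, term, maxchars) {
    const found = str.toLowerCase().indexOf(term.toLowerCase());
    if (found === -1) return null;
    // substring clamps out-of-range arguments, so matches near the
    // beginning or end of the string are safe.
    return str.substring(found - maxchars, found + maxchars);
}

const sample = "I was waiting in line at Starbucks this morning.";
console.log("..." + surroundingText(sample, "Starbucks", 10) + "...");
```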

I run the script from the command line, like this: node reputation-checker.js. I am using Node.js version 8.3, and Ubuntu 16.04, but I think this script will work for most other operating systems and platforms. This is what the output of my script looks like:

Item #0
Found url and text:
https://www.reddit.com/r/aznidentity/comments/cybh97/we_are_not_honorary_white_people_and_do_not/
…tention span which many folks nowadays unfortunately do not. I’ll order my morning coffee at an Asian-owned establishment rather than an overpriced Starbucks where it ends up tasting burnt anyway. I could not care less for Mexican food or a Westerners version of Chinese food. Authentic Asian cuisi…
Item #1
Found url and text:
https://www.reddit.com/r/exmormon/comments/cybd66/mom_why_dont_you_get_a_blessing_me_because_they/
… of my skin alternates between burning and itching. Sometimes, I get both at once. Which is what happened yesterday. While I was waiting in line at Starbucks, talking to my mom on the phone. My mom knows I left the church. And she knew I was at Starbucks. But none of that should matter. She’s livin…
Item #2
…

I could do more to refine this code – normally, I’d refactor it, and write some tests, and do more error handling, maybe automate it to send me an email periodically… but this is good enough for demonstration purposes. Here’s a link to the entire script, if you want to download it and mess around with it.

I hope you enjoyed this tutorial! Please feel free to use the “subscribe” form if you’d like to keep posted on updates to the “Full Stack Oasis” blog. I only post about once a week.


How to call the reddit REST API using Node.js – Part III

In Part I and Part II, I called the reddit REST API using the curl command line tool.

Now, I’m going to create a Node.js script that does this. Why create a script, when I can already do what I want with curl?

(1) I can more easily reuse the code that I’ve written. In Part II, I mentioned performing a search for the word “Starbucks” at reddit. If I wanted to do a different type of search, I could alter my script to search for something else instead.

(2) I can more easily execute the code that I’ve written. For example, I could run the script on a daily or hourly basis using a cron job, without having to do anything manually. “Set it and forget it” is awesome!

(3) I can more easily use the code that I’ve written. For example, I can bring in Node.js libraries to process the output of my script, and easily get from it what interests me.

So, now I’m going to move from curl to Node.js to do what I want, repeatedly, in an automated way.

Recall that I am pretending to be in charge of reputation management for Starbucks. I want to get recent comments that come up on reddit that mention Starbucks so I can quickly look for problems (or, hopefully, compliments!), and respond.

Below, I’ve shown just the Node.js code which can be used to retrieve my access token. Notice how much more complicated this is than the single curl command that I used previously! By the way, there are definitely simpler ways to do this using Node.js. I’m writing this example using just the built-in libraries that come with Node.js, which makes things a bit more complicated than they have to be.

const https = require('https');

let postData = "grant_type=password&username=my_user_name&password=MyExcellentPassword";
let username = "my_reddit_id";
let password = "my_reddit_secret";

/**
 * A method to get an access token to call reddit Search API
 */
const getAccessToken = function () {
	const options = {
		hostname: "www.reddit.com",
		port: 443,
		path: '/api/v1/access_token',
		method: 'POST',
		headers: {
			"Content-Type": "application/x-www-form-urlencoded",
			"Content-Length": Buffer.byteLength(postData),
			"Authorization": "Basic " + Buffer.from(username + ":" + password).toString("base64"),
			"User-Agent": "my test bot 1.0"
		}
	}

	const req = https.request(options, (res) => {
		if (res.statusCode === 200) {
			let chunks = [];
			res.on('data', (d) => {
				/*
				* the output data has the format
				* {"access_token": "271295382352-tV_vIeKVRgq7Juh3iYHmW4oyT64",
				* "token_type": "bearer", "expires_in": 3600, "scope": "*"}
				* But d is a Buffer object, and has to be translated into an
				* object at the end.
				*/
				chunks.push(d);
			}).on('end', () => {
				let result = Buffer.concat(chunks);
				let tmpResult = result.toString();
				try {
					let parsedObj = JSON.parse(tmpResult); // TODO do something with this Object, which contains my access token

				} catch (err) {
					process.stderr.write(String(err) + "\n");
				}
			});
		} else {
			console.error("Received status code " + res.statusCode);
		}
	});

	req.on('error', (error) => {
		process.stderr.write(String(error) + "\n");
	})

	req.write(postData);
	req.end();
};

getAccessToken();

You don’t have to understand all this code in detail. If you skim it, you should get an idea of what it’s doing. You are making an https request to reddit’s API. When a web browser makes an https request, there’s a lot of “stuff” going on under the hood. We have to code up some of that stuff here; that’s what the options object is for. Once the request is made to reddit, the response comes back in “chunks” of binary data over the network. Node.js waits for all of that data to arrive. When it’s finished, we use the built-in Buffer.concat method to concatenate all the binary chunks into one Buffer object, decode it into a JSON string, and parse that into an Object. The Object contains our access_token property. In my next post, we’ll use that token to access the reddit API and search for recent posts about “Starbucks”.

If you found this interesting, click the subscribe button below! I write a new post about once a week.

How to call the reddit REST API using Node.js – Part II

In my previous post, I showed how easy it is to authenticate using the reddit API. Authenticating leaves me with an “access token”, delivered as a JSON string that looks like this:

{"access_token": "261295382351-FBXDPTpUam35NR_UTJSXnjl5Pmd", "token_type": "bearer", "expires_in": 3600, "scope": "*"}

The access token gives me permission to use the reddit API. Now I can actually write an application that consumes the reddit API.

In order to motivate this demo, suppose you’re in charge of reputation management at Starbucks, and you want to be notified if anyone says anything about Starbucks at reddit. Fortunately, there’s a handy search API which can be used to search for the word Starbucks.

Here’s how you can use it to search for any mention of “Starbucks” at reddit:

curl -H "Authorization: bearer 261295382351-FBXDPTpUam35NR_UTJSXnjl5Pmd" --user-agent 'my test bot 1.0' https://oauth.reddit.com/r/all/search?q=Starbucks

This returns a list of 25 results. If you check the search API that I linked to above, you’ll find that the default number of results (“limit”) is 25. The documentation doesn’t mention how results are sorted, but it looks like they are sorted by “relevance”, by default. We’d rather get them by “new”, to get the most recent posts. So let’s add that to the query string.

curl -H "Authorization: bearer 261295382351-FBXDPTpUam35NR_UTJSXnjl5Pmd" --user-agent 'my test bot 1.0' https://oauth.reddit.com/r/all/search?q=Starbucks&sort=new

Now, your output is looking pretty hairy – a giant string of JSON text. You can take a look at this output in an online JSON “prettifier”, if you want to get a better idea of what’s being returned. This will be easier if your output is in a file. You can redirect the output of curl to a file. Just make sure that you put your URL in quotes, because the ampersand is interpreted by the shell to mean “start this command in the background”, which is not what you want. Do this:

curl -H "Authorization: bearer 261295382351-FBXDPTpUam35NR_UTJSXnjl5Pmd" --user-agent 'my test bot 1.0' "https://oauth.reddit.com/r/all/search?q=Starbucks&sort=new" > data.json

Notice that so far, all I’ve done is use curl, a command line app, to call the reddit API! Now that I have a pretty good idea of how I’m going to be using the reddit API, I will start writing my Node.js application. I’ll dive into that next week.

If you found this interesting, click the subscribe button below! I write a new post about once a week.