With more applications and software programs comes the need for an easy way for them to communicate with one another. In this complete beginner's guide, learn what an API is, how it works, and the common types of APIs used today, with real-life examples.
API stands for Application Programming Interface, which allows two or more applications to communicate with one another. In other words, APIs allow companies to serve their tools and services in a fast, simple way. When you send money through PayPal, call an Uber, or add someone on social media, you're using an API.
The key idea is that an application doesn’t need to know the details about how the other application works; it only needs to know about how to use the API. This is the concept of abstraction, which makes it possible for new programs to build on top of the hard work encoded into other programs.
There are many kinds of APIs and a mountain of technical definitions. This post will discuss only a few modern APIs in plain language with practical examples and some hands-on components.
A great example of an API is video games, which make a nice metaphor for the power of abstraction that an interface provides.
The video game player presses buttons, and somehow the video game console knows how to take those buttons and turn them into actions in the game. The player only needs to know the buttons to press in order to have fun with the game; they do not need to know how the machine takes those inputs and uses them to render actions on the screen.
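The same idea can be sketched in code. Below is a toy Python class (all names are hypothetical, purely for illustration) in which the caller only "presses buttons" through public methods and never touches the internal logic:

```python
class GameConsole:
    """A toy interface: callers press buttons; the internals stay hidden."""

    def __init__(self):
        self._position = 0  # internal state the player never sees directly

    def press_right(self):
        # Hidden implementation detail: how a button press becomes movement.
        self._position += 1
        return self._render()

    def _render(self):
        # "Private" helper: not part of the public interface.
        return f"player at x={self._position}"


console = GameConsole()
print(console.press_right())  # the player only needs to know the button
```

The caller's contract is just `press_right()`; the class is free to change how `_render` works without breaking anyone, which is exactly the abstraction an API provides.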
The REST API is the most popular API architecture in modern web applications. REST stands for Representational State Transfer. The underlying theory is nuanced, but in practice a REST API basically means there is a client that makes certain kinds of requests to alter resources on a server. Each request represents the state the client wants the resource to have and attempts to transfer that state to the server. The server responds on its own without having to remember anything about the client (this property is known as statelessness).
The most common place you will see REST APIs at play is in your browser. You click a button, and the browser will send a request to a server to retrieve a resource and display that resource on your screen.
REST APIs are sometimes described as “RESTful”.
It’s difficult to conceptualize a REST API without a concrete example. Suppose there is a RESTful book API service hosted at https://api.fakebooksite.com. Users can create, read, update, and delete books using the API.
The cURL command is an application that can create HTTP requests and is commonly used to interact with REST APIs from the command line. Here is an example of a request one might submit to the books REST API server with the cURL application:
curl \
  --request GET \
  "https://api.fakebooksite.com/v1/authors?author=chuck&sort=title:desc"
There are a few things to notice about this command:
- The --request GET option specifies that the request should use the “GET” HTTP method.
- The https:// URL includes query parameters (author=chuck and sort=title:desc) that tell the server which resources to return and how to sort them.
The REST API server application would be written in a way to respond to this kind of request and serve a response. In this case, the books API might retrieve information about books written by “chuck” from a database, perform a sort operation, and send a response to the cURL client in a JSON format.
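To make the server side concrete, here is a minimal Python sketch of how a handler for that GET request might filter and sort books before returning JSON. The function name and the in-memory book list are hypothetical stand-ins, not the code of any real service:

```python
import json
from urllib.parse import parse_qs

# Hypothetical stand-in for a database table of books.
BOOKS = [
    {"author": "chuck", "title": "Zebra Tales"},
    {"author": "chuck", "title": "Apple Adventures"},
    {"author": "alice", "title": "Misc"},
]

def handle_get_authors(query_string):
    """Handle GET /v1/authors?author=...&sort=field:direction."""
    params = parse_qs(query_string)
    author = params.get("author", [None])[0]
    results = [b for b in BOOKS if b["author"] == author]
    sort_spec = params.get("sort", [""])[0]  # e.g. "title:desc"
    if sort_spec:
        field, _, direction = sort_spec.partition(":")
        results.sort(key=lambda b: b[field], reverse=(direction == "desc"))
    return json.dumps(results)  # the JSON body sent back to the client

print(handle_get_authors("author=chuck&sort=title:desc"))
```

A real server would wrap this logic in an HTTP framework and query a database, but the request-parse, filter, sort, serialize flow is the same.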
JSON stands for JavaScript Object Notation. Despite its name, it is simply a standard file format that any language can use to send and receive data over the internet. The next example will send JSON data to a REST API endpoint.
TIP: APIs often respond with data encoded in JSON format. The jq command is a handy tool for parsing JSON responses. It is useful to pipe the results of a curl command into jq to see pretty output, like this (-X is a shortcut for --request):
curl -X GET https://my-cool-site.com/v1/cool-quotes | jq
It’s great to receive data from a REST API endpoint, but often we also want to send a “payload” of data to a REST API. Here is an example that creates/updates the book “Chuck’s Cool Book”:
curl \
  --request PUT \
  --header "Content-Type: application/json" \
  --user chuck:chuck-password \
  --data '{ "author": "chuck", "title": "Chuck'\''s Cool Book", "text": "When I was a boy, I ate 3 dozen eggs each morning…" }' \
  https://api.fakebooksite.com/v1/books
There are a couple of new things here to notice:
- --request PUT: This HTTP method asks the server to create or replace the resource at the endpoint.
- --header "Content-Type: application/json": This header tells the server that the payload is in JSON format.
- --user chuck:chuck-password: This option sends basic authentication credentials along with the request.
- --data: We use this option to compose our JSON payload to send to the server. In this case, we define the author, title, and book text in JSON format to send to the server.
In summary, the above cURL command will send an HTTP PUT request to the server at the /v1/books endpoint with a JSON payload. When the server receives the PUT request, it would authenticate the user, check whether the user is authorized to create that resource, perhaps update a database record for that book, and then return a response to the client (e.g. status code 200 to report success).
NOTE: The server might behave very differently depending on which HTTP method is used. In this case, a GET request to https://api.fakebooksite.com/v1/books?author=chuck&title=Chuck%27s+Cool+Book might not require authentication at all and give a response that includes the contents of “Chuck’s Cool Book” in JSON format.
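The server-side flow described above (authenticate, authorize, update, respond) can be sketched as a short Python function. The credential store, the authorization rule, and all names here are hypothetical, chosen only to mirror the cURL example:

```python
import json

USERS = {"chuck": "chuck-password"}  # hypothetical credential store
BOOKS = {}                           # hypothetical books "database"

def handle_put_book(username, password, body):
    """Authenticate, authorize, upsert the book, and return an HTTP status code."""
    if USERS.get(username) != password:
        return 401  # authentication failed
    book = json.loads(body)
    if book.get("author") != username:
        return 403  # not authorized to write someone else's book
    BOOKS[book["title"]] = book  # create or update the record
    return 200  # success

status = handle_put_book(
    "chuck", "chuck-password",
    '{"author": "chuck", "title": "Chucks Cool Book", "text": "..."}',
)
print(status)  # 200
```

Real services layer in TLS, token-based auth, validation, and persistent storage, but the status codes (401, 403, 200) map directly onto the steps above.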
Confluent offers a REST API that allows users to interact with resources related to its data streaming platform. This makes for a nice real-world example for what a REST API is and how it’s used.
Confluent Server includes a fully managed, cloud-native distribution of Apache Kafka, the popular open-source data streaming platform. Here is an example cURL command to Confluent Server’s REST API that creates a so-called “Kafka topic” (see https://developer.confluent.io/learn-kafka/apache-kafka to learn more about Apache Kafka, including what Kafka topics are):
curl --silent -X POST -H "Content-Type: application/json" \
--data '{"topic_name": "my-cool-topic"}' \
-u chuck:chuck-password \
http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics | jq
Here are a few things to notice about the cURL command:
- The --silent flag hides cURL's progress output.
- The -X, -H, and -u flags are short forms of the --request, --header, and --user options from the earlier examples.
- The response is piped into the jq command to make it look prettier.

Confluent Server could respond with a status code of 401 to indicate the user's authentication failed, status code 403 to indicate the user is not authorized to create the topic, or 200 to indicate success. Here is the response JSON payload upon success:
{
  "kind": "KafkaTopic",
  "metadata": {
    "self": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic",
    "resource_name": "crn:///kafka=7cteo6omRwKaUFXj3BHxdg/topic=my-cool-topic"
  },
  "cluster_id": "7cteo6omRwKaUFXj3BHxdg",
  "topic_name": "my-cool-topic",
  "is_internal": false,
  "replication_factor": 0,
  "partitions": {
    "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions"
  },
  "configs": {
    "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/configs"
  },
  "partition_reassignments": {
    "related": "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics/my-cool-topic/partitions/-/reassignment"
  }
}
This indicates the resource was successfully created and gives detailed information about it.
The entire API reference is here, including many cURL examples: https://docs.confluent.io/platform/current/kafka-rest/api.html
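For comparison, the same topic-creation request could be composed in Python with the standard library alone. The host, cluster ID, and credentials below are the same placeholder values used in the cURL example, not real endpoints:

```python
import base64
import json
import urllib.request

# Placeholder values carried over from the cURL example above.
url = "http://localhost:8090/kafka/v3/clusters/7cteo6omRwKaUFXj3BHxdg/topics"
payload = json.dumps({"topic_name": "my-cool-topic"}).encode()
credentials = base64.b64encode(b"chuck:chuck-password").decode()

request = urllib.request.Request(
    url,
    data=payload,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {credentials}",
    },
)
# urllib.request.urlopen(request) would send it; the server's JSON
# response could then be parsed with json.load.
```

Note that the pieces are identical to the cURL flags: the method, the Content-Type header, the basic-auth credentials, and the JSON body.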
GraphQL is an API query language that provides an alternative to REST. GraphQL is great because it gives clients a more flexible way to query an API for exactly what they need and provides developers an easier way to evolve their APIs while maintaining backward compatibility.
GraphQL specifies three execution operations: queries, which read data; mutations, which write data; and subscriptions, which receive a continuous stream of updates.
Go to https://lucasconstantino.github.io/graphiql-online/ to play hands-on with a GraphQL API.
Here is an example query that you can input into the GraphQL playground:
query {
  countries(filter: { code: { regex: ".*A.*" } }) {
    currency
    code
    name
    continent {
      name
    }
  }
  languages(filter: { code: { regex: ".*r" } }) {
    native
    name
    code
  }
}
This example illustrates how GraphQL gives the client the power to decide what data they want the server to return. With a REST API, making a GET request to a /countries endpoint would return a lot of information the client doesn’t necessarily need, and it would be up to the client to parse through the response. In addition, GraphQL empowers the client to choose data they want from a variety of resources in a single request. In this example, we also see that the client has requested information about languages. With a RESTful architecture, the client would have to make two requests, one to /countries and another to /languages.
Here are a few things to notice about the query:
- With (filter:{code:{regex:".*A.*"}}), we select only the countries whose country code contains an “A” using regular expressions.
- The fields listed inside each block (e.g. currency, code, name) tell the server exactly which fields to include in the response.

The GraphQL web editor, called GraphiQL, is sending an HTTP request to a real GraphQL endpoint. Here’s what an equivalent cURL command looks like (note the backslash-escaped double quotes and the lack of newlines in the query value):
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"query": "query{countries(filter:{code:{regex:\".*A.*\"}}){currency code name continent{name}}languages(filter:{code:{regex:\".*r\"}}){native name code}}"}' \
  https://countries.trevorblades.com/ | jq
Next, explore the API on your own. It’s especially helpful to use Ctrl+Space to take advantage of auto-complete. Also, don’t forget to explore the docs panel in the upper right. These docs are automatically generated from GraphQL’s type system (yay schemas and auto-generated documentation!).
GraphQL has a “subscription” operation that goes well conceptually with Apache Kafka’s data streaming nature. Here is a blog post that goes into detail about how to create a GraphQL service with Apache Kafka, including a hands-on proof of concept application that uses ksqlDB as the streaming database layer on top of Apache Kafka.
Another popular alternative to REST API architecture is RPC, which stands for Remote Procedure Call. With an RPC API, the client sends a request to a server that asks the server to execute a “procedure” (i.e. “function”, or “method”).
RPC is different from REST because the endpoint in a REST API is a resource like /v1/books/, whereas the endpoint in an RPC API is an action like /v1/book.update. In other words, REST APIs are centered around nouns, whereas RPC APIs are centered around verbs.
RPC APIs are popular for synchronous communication between microservices. The most popular framework for developing RPC APIs is gRPC.
This example by Arnaud Lauret is useful to illustrate the difference between REST and RPC APIs:
Operation | RPC (operation) | REST (resource) |
---|---|---|
Signup | POST /signup | POST /persons |
Resign | POST /resign | DELETE /persons/1234 |
Read person | GET /readPerson?personid=1234 | GET /persons/1234 |
Read person's items list | GET /readUsersItemsList?userid=1234 | GET /persons/1234/items |
Add item to person's list | POST /addItemToUsersItemsList | POST /persons/1234/items |
Update item | POST /modifyItem | PUT /items/456 |
Delete item | POST /removeItem?itemId=456 | DELETE /items/456 |
Source: https://apihandyman.io/do-you-really-know-why-you-prefer-rest-over-rpc/#examples
Notice RPC APIs only use POST and GET HTTP methods, choosing to let the endpoint describe what kind of operation is taking place. Compare that to REST, where DELETE, GET, POST, and PUT methods to the same resource endpoint will result in different behaviors.
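The verb-centric style can be sketched with a toy Python dispatcher (all endpoint and function names are hypothetical) that routes an RPC-style path straight to a function, whereas a REST server would instead branch on HTTP method plus resource path:

```python
# Toy RPC dispatcher: each endpoint names an action (a verb).
PEOPLE = {}

def signup(params):
    person_id = str(len(PEOPLE) + 1)
    PEOPLE[person_id] = params["name"]
    return {"id": person_id}

def read_person(params):
    return {"name": PEOPLE[params["personid"]]}

RPC_ROUTES = {"/signup": signup, "/readPerson": read_person}

def dispatch(path, params):
    # The path alone identifies the operation; the HTTP method barely matters.
    return RPC_ROUTES[path](params)

created = dispatch("/signup", {"name": "Chuck"})
print(dispatch("/readPerson", {"personid": created["id"]}))  # {'name': 'Chuck'}
```

In a REST design, the same two operations would instead be POST /persons and GET /persons/{id}, with the dispatch keyed on method and resource rather than on a verb in the path.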
Slack is a popular enterprise collaboration app. Slack offers an RPC API to interact with its resources. Here is an example for setting a user’s profile data:
# Set user profile data
curl -X POST \
  -H "Content-Type: application/json" -H "Authorization: Bearer my-super-secret-token" \
  --data '{
    "profile": {
      "status_text": "riding a train",
      "status_emoji": ":mountain_railway:",
      "status_expiration": 1532627506,
      "first_name": "John",
      "last_name": "Smith",
      "email": "john@smith.com",
      "fields": {
        "Xf06054BBB": {
          "value": "Barista",
          "alt": "I make the coffee & the tea!"
        }
      }
    }
  }' "https://slack.com/api/users.profile.set?user=W1234567890" | jq
The JSON data payload is taken straight from the example in Slack’s documentation. Notice the endpoint /api/users.profile.set is an action rather than a resource. A few other action-oriented RPC endpoints related to users include users.setPresence, users.setPhoto, and users.deletePhoto. Slack’s API documentation refers to these endpoints as “methods”, which makes sense for RPC.
All the API examples so far have been APIs over the HTTP protocol (REST, GraphQL, RPC), but another common way to use APIs is directly in the code of your own application through libraries.
A library is a collection of objects and functions that you can import into your own application to use. A library is typically tailored to solve a specific problem. Instead of solving that problem yourself, you take advantage of the library’s capabilities by using its API. This will make more sense as you dig into the examples.
Here is a simple Python program that uses the API provided by the json library to serialize a Python dictionary into a JSON file.
import json

my_dictionary = {"my-key": [1, 2, 3]}
with open("my-file.json", "w") as f:
    json.dump(my_dictionary, f)
Here are a few things to notice:
- The import json statement brings the library’s objects and functions into the program’s namespace.
- json.dump is the library’s API: we hand it a dictionary and a file object, and it handles all of the serialization details.
- We never need to know how the library converts Python values into JSON text; that is abstraction at work.
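To complete the round trip, the same library’s json.load function can read the file back into an equivalent Python object, again without us knowing anything about the parsing internals:

```python
import json

my_dictionary = {"my-key": [1, 2, 3]}
with open("my-file.json", "w") as f:
    json.dump(my_dictionary, f)  # serialize to disk

# Read the file back; the library handles all parsing details.
with open("my-file.json") as f:
    restored = json.load(f)

print(restored == my_dictionary)  # True
```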
Apache Kafka provides APIs to read from, process, and write data to Kafka servers (called brokers). There are Kafka client libraries available in many programming languages. We’ll use Java to contrast with the previous Python example.
…
import org.apache.kafka.clients.producer.*;
…
public class ProducerExample {
    public static void main(final String[] args) throws IOException {
        …
        // Load connection configuration for the Kafka brokers.
        final Properties props = new Properties();
        InputStream propsFile = new FileInputStream("src/main/resources/producer.properties");
        props.load(propsFile);

        // The KafkaProducer API hides the details of talking to the brokers.
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        String t = "my_topic";
        String k = "mykey";
        String v = "myvalue";
        ProducerRecord<String, String> record = new ProducerRecord<>(t, k, v);
        producer.send(record);
    }
}
The pattern is the same as the previous example: import the library, then use its API. In this case, we import the Kafka producer classes and call the KafkaProducer object’s send method to write a record to a Kafka topic, without needing to know how the client communicates with the brokers.
Built by the original creators of Apache Kafka, Confluent offers a scalable data streaming platform with which you can build scalable, real-time, event-driven applications. By using events as the basis for connecting your applications and services, you benefit in many ways, including loose coupling, service autonomy, elasticity, flexible evolvability, and resilience.
You can use the APIs of Kafka and its surrounding ecosystem, including ksqlDB, for both subscription-based consumption as well as key/value lookups against materialized views, without the need for additional data stores. The APIs are available as native clients as well as over REST.
To learn more about data streaming and event-driven software architecture, check out Confluent Developer. Or, get started with Confluent on any cloud, on any scale in minutes. New users get $400 free to spend.
- Excellent reference for all things HTTP, like headers, methods, status codes, etc.
- The one-stop shop to learn about event-driven software architecture, Apache Kafka, and Confluent