Build ML models with a GraphQL API using MindsDB and Grafbase

Query models, databases, integrations, and views in MindsDB from a GraphQL endpoint with Grafbase



Introduction

This article explores how to use Grafbase to query MindsDB through a GraphQL API. MindsDB is an AI database used for creating and managing AI models and for connecting various data sources for training. Grafbase is a framework for creating and deploying GraphQL APIs with various data sources, resolvers, and connectors.

I created a GraphQL API for interacting with the MindsDB REST API as a submission to the Hashnode x Grafbase hackathon. Shout out to both communities for giving me such a great opportunity to build a project and showcase my skills.

Demo Video

💡

NOTE: The endpoint doesn't return data due to proxy fetch limitations

What is MindsDB?

MindsDB is a database service that lets us integrate AI workflows and models with various integrations and data sources. It is quite a developer-friendly platform and provides a generous free tier.

MindsDB is an AI Database that manages AI models and automates workflows connecting AI models with enterprise databases

MindsDB setup

Setting up a MindsDB instance on the cloud is the easiest way to get started: just head over to MindsDB Cloud and register with your email.

💡
MindsDB Cloud signup https://cloud.mindsdb.com

What is Grafbase?

Grafbase is a tool that allows us to connect databases and APIs behind a single GraphQL endpoint.

The intuition behind this project was to create a GraphQL endpoint that queries the MindsDB API and returns only the desired responses. So let's look at what can be done with this GraphQL endpoint: you can do almost everything that is currently feasible with the REST API, with a few exceptions.

To get started with Grafbase, sign up via GitHub on the Grafbase Platform. You can either start a project from scratch or import one from GitHub.

Let's start with the basics of querying existing projects, models, databases, tables, and views. Then we will go into training models and creating views.

Databases

A database in MindsDB is like a data source; it can be any integration, from typical databases, data lakehouses, and warehouses to applications. This is a foundational concept in machine learning, as we need large, structured datasets to train models and create views.

We will first create a type in grafbase.config.ts for the Database type, which has name, engine, and type fields. We also create an input type for authentication, mindsdbAuth, with email and password as required fields, and host and managed as optional fields for self-hosted instances.

After defining the type and input for the database list operation, we can write the g.query type with the name databases and the parameters args, resolver, and returns. args is used for passing any arguments/input with the query. resolver is the TypeScript module that will be triggered when the query is executed. The final parameter, returns, specifies what type of data is expected from the query. Here, args is set to {auth}, an inputRef to the mindsdbAuth input type. The resolver is set to databases/list, which will be a TypeScript file at grafbase/resolvers/databases/list.ts. Finally, returns is set to g.ref(database).optional().list().optional(), which means the query returns a list of Database objects where both the list and its items may be null.

// grafbase/grafbase.config.ts


import { g, config } from '@grafbase/sdk'

const mindsdbAuth = g.input('Auth', {
    email: g.email(),
    password: g.string(),
    host: g.string().optional(),
    managed: g.boolean().optional(),
})

const database = g.type('Database', {
  name: g.string().optional(),
  type: g.string().optional(),
  engine: g.string().optional(),
})

g.query('databases', {
    args: {auth: g.inputRef(mindsdbAuth)},
    resolver: 'databases/list',
    returns: g.ref(database).optional().list().optional(),
})


// only once in the grafbase.config.ts

export default config({
  schema: g
})

So, this is the type definition of the databases query. Now we need to create the resolver for it, which lives at grafbase/resolvers/databases/list.ts.

// grafbase/resolvers/databases/list.ts

import {login} from "../../connect";

export default async function Databases(_, { auth }) {
    try {
        const headers = await login(auth);
        const baseUrl = auth.host || process.env.BASE_URL;

        const databasesResponse = await fetch(`${baseUrl}/api/databases`, {
            headers: headers,
            credentials: 'include'
        });

        if (databasesResponse.status !== 200) {
            throw new Error('Failed to fetch databases');
        }

        const databasesData = await databasesResponse.json();
        return databasesData;
    } catch (error) {
        console.error(error);
        return [error];
    }
}


In the above TypeScript snippet, we export a function Databases which takes auth in its args. auth is the mindsdbAuth type, holding the credentials and settings for authenticating to the MindsDB instance. Databases first calls and awaits the login method imported from the ../../connect module. We will see the login method next; for now, assume it returns the headers from the logged-in session. We then make another request to baseUrl/api/databases, which as a GET request returns the list of databases.

The fetch call sends a GET request to the endpoint, sets the headers to those obtained from the login method, and sets credentials to include so that the cookies from the provided headers are sent along. If the response code is not 200 we throw an error; otherwise we return the JSON response.

Now, let's look at the login method. It is used by every resolver in this GraphQL endpoint, so we only need to look at it once; the same pattern repeats across all requests, queries, and mutations.

// grafbase/connect.ts


export const login = async (input) => {
    try {
        const email = input.email;
        const password = input.password;
        const baseUrl = input.host || process.env.BASE_URL;

        // Login and retrieve cookies
        const loginResponse = await fetch(`${baseUrl}/cloud/login`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            credentials: 'include',
            body: JSON.stringify({
                email,
                password,
            }),
        });

        if (!loginResponse.ok) {
            throw new Error('Login failed');
        }

        // Extract cookies from login response
        const cookies = loginResponse.headers.get('set-cookie');
        const cookieArray = cookies ? cookies.split(', ') : [];
        const headers = new Headers();
        headers.append('Cookie', cookieArray.join('; '));
        return headers;
    } catch (error) {
        console.error(error);
        // rethrow so callers don't try to use null headers
        throw error;
    }
}

The login method takes the auth input, which has email, password, host, and managed fields. We first set the base URL; I am currently pointing it at the cloud, but it could just as well be a local instance. With a local instance, you won't need to log in.

If we are using the cloud instance, however, we send a POST request to baseUrl/cloud/login with the email and password in the body. The endpoint returns no body of interest, but the set-cookie headers in the response carry the authenticated session, and we want to use them in further requests. So we extract the set-cookie field from the headers and split it on ", ", which gives us all the cookies set by the authenticated response. We then create a new Headers() instance and set its Cookie field to the joined cookies from cookieArray.
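The cookie-forwarding step can be sketched in isolation (pure string handling, no network; buildCookieHeader is an illustrative helper, not part of the project):

```typescript
// Rebuild a Cookie request header from a set-cookie response value,
// mirroring the split/join logic in connect.ts.
function buildCookieHeader(setCookie: string | null): string {
    const cookieArray = setCookie ? setCookie.split(', ') : [];
    return cookieArray.join('; ');
}

console.log(buildCookieHeader('session=abc123, remember_token=xyz'));
// session=abc123; remember_token=xyz
```

Note that splitting on ", " can misbehave if a cookie attribute (such as an Expires date) itself contains a comma; it is fine for simple session cookies.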

So, that is the basic logic of interacting with the MindsDB API and its authentication flow.

Let's test the databases query in the PathFinder or your GraphQL client of choice.

query Databases(
  $email: Email!
  $password: String!
) {
  databases(
    auth: {
      email: $email
      password: $password
    }
  ) {
    name
    type
    engine
  }
}

Set the variables as { "email": "abc@def.com", "password": "supersecret" } in the client interface.

Similarly, we could create a query to fetch a single database, but that's quite repetitive; instead we can extend the list query with optional arguments to allow filtering and fine-tuning of the responses in the resolvers.
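As a sketch of that idea (filterDatabases and the optional name argument are hypothetical, not part of the project), the resolver could narrow the fetched list before returning it:

```typescript
// Hypothetical helper: narrow the databases list by an optional name argument.
type Database = { name?: string; type?: string; engine?: string };

function filterDatabases(databases: Database[], name?: string): Database[] {
    if (!name) return databases; // no filter supplied: return everything
    return databases.filter((db) => db.name === name);
}

const all: Database[] = [
    { name: 'testdb1', engine: 'sqlite' },
    { name: 'mindsdb', engine: 'mindsdb' },
];
console.log(filterDatabases(all, 'testdb1')); // [ { name: 'testdb1', engine: 'sqlite' } ]
```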

Create and Delete Database/data source

We can also create mutations, i.e. operations that change data. We can create, update, and delete databases in the MindsDB instance: creation takes the name, engine, and parameters (specific to the integration database); deletion takes only the name; and updating takes the engine and parameters for the database with the given name.

// grafbase/grafbase.config.ts
...

const databaseCreateInput = g.input('DatabaseCreateInput', {
  name: g.string(),
  engine: g.string(),
  parameters: g.json().optional(),
})

g.mutation('databaseCreate', {
  args: { auth: g.inputRef(mindsdbAuth), input: g.inputRef(databaseCreateInput) },
  resolver: 'databases/create',
  returns: g.ref(database).optional(),
})

...

In the above snippet, we added databaseCreateInput as the input type, which has name and engine as required fields. parameters is an optional json field because we need to pass properties like host, port, and db_file, which are specific to the integration/data-source provider.

We then created a mutation, which is quite similar to a query except that it takes the databaseCreateInput input data and returns a single instance of the database reference.

We will implement the resolver databases/create in the grafbase/resolvers/databases/create.ts file as follows:

// grafbase/resolvers/databases/create.ts


import {login} from "../../connect";

// input for creation
export default async function DatabaseCreate(_, {auth, input }) {
    try {
        const headers = await login(auth);
        headers.append('Content-Type', 'application/json');
        const baseUrl = auth.host || process.env.BASE_URL;

        const body = JSON.stringify({
            database:{
                name: input.name,
                engine: input.engine,
                parameters: input.parameters
            }
        })

        const databaseResponse = await fetch(`${baseUrl}/api/databases`, {
            method: 'POST',
            headers: headers,
            body: body,
            credentials: 'include'
        });

        if (databaseResponse.status !== 201) {
            throw new Error('Failed to create database');
        }
        const database = await databaseResponse.json();
        return database;
    } catch (error) {
        console.error(error);
        throw new Error('Error creating database');
    }
}

The above resolver function takes the auth parameter as usual for authenticating the user; the second parameter is the databaseCreateInput, which has name, engine, and parameters fields. We add a Content-Type: application/json header since we are sending a JSON body to the API. The body wraps the name, engine, and parameters fields in a database object. We send a POST request to the /api/databases endpoint and check for a 201 status code to see whether the database was created. Finally, we return the JSON representation of the newly created database.


mutation DatabaseCreate(
  $email: Email!
  $password: String!
) {
  databaseCreate(
    auth: {
      email: $email
      password: $password
    }
    input: {
      name: "testdb1"
      engine: "sqlite"
      parameters: { db_file: "test.db" }
    }
  ) {
    name
    engine
    type
  }
}

The above mutation is a named operation, DatabaseCreate (the name can be anything), that calls databaseCreate. We pass the variables for auth; the input field is an object containing the name, engine, and parameters fields. Finally, we select the attributes we need from among name, engine, and type.

Similarly, we can create update and delete mutations for updating and deleting a database in the MindsDB instance.

The update request includes the database name in the URL and doesn't take the name in the body; including it wouldn't change the name anyway. The method for the fetch request will be PUT.

For the delete request, the database name is appended to the URL and no input is required: we just specify the name of the database in the URL and send a DELETE request.


const databaseUpdateInput = g.input('DatabaseUpdateInput', {
  engine: g.string().optional(),
  parameters: g.json().optional(),
})


g.mutation('databaseUpdate', {
  args: { auth: g.inputRef(mindsdbAuth), databaseName: g.string(), input: g.inputRef(databaseUpdateInput) },
  resolver: 'databases/update',
  returns: g.ref(database).optional(),
})

g.mutation('databaseDelete', {
  args: { auth: g.inputRef(mindsdbAuth), databaseName: g.string() },
  resolver: 'databases/delete',
  returns: g.string().optional(),
})


// name, input for updation
// NOTE: the input is only { engine, parameters }; the name can't be changed

export default async function DatabaseUpdate(_, { auth, databaseName, input }) {
    // same as DatabaseCreate, but change the url to `/api/databases/${databaseName}`
    // and the method to 'PUT'
}


// name for deletion

export default async function DatabaseDelete(_, { auth, databaseName }) {
    // same as DatabaseCreate, but change the url to `/api/databases/${databaseName}`,
    // the method to 'DELETE', and drop the body
}

Projects

In MindsDB, a project is a self-contained unit containing a trained model and the related data used to make predictions for one machine-learning task. A project can hold multiple models and views. Under the hood a project is just a database, which is why the two terms are used interchangeably. The Projects REST API is still a work in progress in the MindsDB docs, so this may change as the documentation and API mature.

Right now the Projects API and its documentation expose only one field, name. We can add more fields in the future as they get implemented in the MindsDB API.

// grafbase/grafbase.config.ts


import { g, config } from '@grafbase/sdk'

const project = g.type('Project', {
  name: g.string(),
})

g.query('projects', {
    args: {auth: g.inputRef(mindsdbAuth)},
    resolver: 'projects/list',
    returns: g.ref(project).optional().list().optional(),
})

This simply defines the Project type with a single string field, name, and creates a projects query that returns a nullable list of Project objects.
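The projects/list resolver follows the same pattern as databases/list; a minimal sketch, assuming the instance exposes a GET /api/projects endpoint:

```typescript
// grafbase/resolvers/projects/list.ts (sketch)

import {login} from "../../connect";

export default async function Projects(_, { auth }) {
    try {
        const headers = await login(auth);
        const baseUrl = auth.host || process.env.BASE_URL;

        const projectsResponse = await fetch(`${baseUrl}/api/projects`, {
            headers: headers,
            credentials: 'include'
        });

        if (projectsResponse.status !== 200) {
            throw new Error('Failed to fetch projects');
        }
        return await projectsResponse.json();
    } catch (error) {
        console.error(error);
        return [error];
    }
}
```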

Tables

Let's move on to tables, i.e. the tables inside the databases we interacted with earlier. Tables can be of different types: data, model, or view. So we define a Table type with name and type fields for a particular database in the MindsDB instance.

// grafbase/grafbase.config.ts


const table = g.type('Table', {
  name: g.string(),
  type: g.string(),
})


g.query('databaseTables', {
    args: {auth: g.inputRef(mindsdbAuth), databaseName: g.string()},
    resolver: 'tables/list',
    returns: g.ref(table).optional().list().optional(),
})

Here we created the Table type with two string fields, name and type. The databaseTables query takes a databaseName and lists that database's tables. The resolver is almost the same as databases/list, except the API endpoint changes to /api/databases/{databaseName}/tables.
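Following that description, the tables/list resolver might look like this (a sketch; the only change from databases/list is the endpoint):

```typescript
// grafbase/resolvers/tables/list.ts (sketch)

import {login} from "../../connect";

export default async function Tables(_, { auth, databaseName }) {
    try {
        const headers = await login(auth);
        const baseUrl = auth.host || process.env.BASE_URL;

        const tablesResponse = await fetch(`${baseUrl}/api/databases/${databaseName}/tables`, {
            headers: headers,
            credentials: 'include'
        });

        if (tablesResponse.status !== 200) {
            throw new Error('Failed to fetch tables');
        }
        return await tablesResponse.json();
    } catch (error) {
        console.error(error);
        return [error];
    }
}
```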


query DBTables(
  $email: Email!
  $password: String!
){
  databaseTables(auth: {
    email: $email
    password: $password
  }
  databaseName: "test_sdk"
  ){
    name
    type
  }
}

As described, the query takes the auth and databaseName and lists each table's name and type:

{
  "data": {
    "databaseTables": [
      {
        "name": "models",
        "type": "data"
      },
      {
        "name": "models_versions",
        "type": "data"
      },
      {
        "name": "jobs",
        "type": "data"
      },
      {
        "name": "jobs_history",
        "type": "data"
      },
      {
        "name": "model_a",
        "type": "model"
      },
      {
        "name": "model_b",
        "type": "model"
      },
      {
        "name": "test_view",
        "type": "view"
      },
      {
        "name": "view_dj_ai",
        "type": "view"
      },
      {
        "name": "view_dj_devops",
        "type": "view"
      }
    ]
  }
}

So we can see that the tables in the database are each typed as data, model, or view.

Models

We can now move on to models, the actual ML models that are trained on the data and produce results. The Model type has fields like name, status, active, prediction, etc., which are mostly metrics and settings for training the model.

// grafbase/grafbase.config.ts

const model = g.type('Model', {
    name: g.string(),
    status: g.string(),
    active: g.boolean(),
    prediction: g.string().optional(),
    trainingTime: g.string().optional(),
    accuracy: g.float().optional(),
    createdAt: g.string().optional(),
    error: g.string().optional(),
    fetchDataQuery: g.string().optional(),
    mindsdbVersion: g.string().optional(),
    update: g.string().optional(),
})

g.query('projectModels', {
    args: { auth: g.inputRef(mindsdbAuth), projectName: g.string()},
    resolver: 'models/list',
    returns: g.ref(model).list().optional(),
})

The above type definition and query define the Model representation in the GraphQL API. The model type has fields like:

  • name as a string

  • status, the state of the model

  • active, whether the model is currently usable; it may be false while the model is still training

  • prediction, the name of the column the model is trying to predict

  • trainingTime, the duration the model took to train

  • accuracy, a floating-point value between 0 and 1

And so on. But there is one more complex field that appears neither here nor in the documentation: problem_definition, which specifies the model, its settings, and what to work on. It has sub-fields like target and using. target specifies the column the model should predict, and using is a JSON object with integration- or data-specific settings used to train the model.

query Models(
  $email: Email!
  $password: String!
) {
  projectModels(
    auth: {
      email: $email
      password: $password
    }
    projectName: "test_sdk"
  ) {
    name
    status
    active
  }
}

Let's dry-run the query to get the list of models in the given project. The fields can be changed as per the requirements.

Training Model

Let's get into the actual meat of the API: training a model, i.e. creating it and making the data available to it. First we need to figure out what's required to create a model. In MindsDB, a model is a combination of columns from an integration, your data, the input, and the integration engine.

This Model documentation is quite extensive in exploring the various types of models. The general format is to pick up an integration, use those columns as the actual data, optionally add in your data, provide the input data, and predict or use the model.

So, we can now go ahead and start creating the model using a mutation.

// grafbase/grafbase.config.ts

...

const modelCreateInput = g.input('ModelCreateInput', {
  name: g.string(),
  engine: g.string(),
  predictColumn: g.string(),
  parameters: g.json(),
})

g.mutation('modelCreate', {
  args: {
    auth: g.inputRef(mindsdbAuth), 
    projectName: g.string(), 
    modelData: g.inputRef(modelCreateInput)
  },
  resolver: 'models/create',
  returns: g.ref(model).optional(),
})

...

So, we created an input for creating a model, modelCreateInput, which takes name as the name of the model, engine as the engine/integration used by the model, predictColumn as the name of the column the model will try to predict, and finally everything that goes into the settings as parameters. The settings are engine/integration specific, so parameters is a json field.

The modelCreate mutation takes three parameters: auth as usual, projectName as the project in which the model will be created, and modelData as the modelCreateInput. The mutation returns a single nullable Model object.

The resolver can be written as follows in the grafbase/resolvers/models/create.ts:

// grafbase/resolvers/models/create.ts


import {login} from "../../connect";

export default async function ModelCreate(_, { auth, projectName, modelData }) {
    try {
        const headers = await login(auth);
        headers.append('Content-Type', 'application/json');
        const baseUrl = auth.host || process.env.BASE_URL;
        const url = `${baseUrl}/api/projects/${projectName}/models`;
        let parameters = "";
        for (const [key, value] of Object.entries(modelData.parameters)) {
            parameters += `${key} = '${value}', `;
        }
        parameters = parameters.slice(0, -2);

        const query = `CREATE MODEL ${projectName}.${modelData.name} PREDICT ${modelData.predictColumn} USING ENGINE = '${modelData.engine}', ${parameters};`

        const body = JSON.stringify({
            query: query
        })
        const modelsResponse = await fetch(url, {
            method: 'POST',
            headers: headers,
            credentials: 'include',
            body: body
        });

        if (modelsResponse.status !== 201) {
            throw new Error('Failed to create model');
        }
        const modelsData = await modelsResponse.json();
        return modelsData;
    } catch (error) {
        console.error(error);
        return [error];
    }
}

It turns out the API takes a single SQL query at the /models endpoint to create a model via a POST request. So, instead of asking the user to write the query themselves, we do a little work and just prompt them for the blanks to fill in: the engine name, the predict column, and the additional parameters. We pass name, predictColumn, and engine through as-is; for parameters, however, we need to do a little work.

The SQL statement is of the form

  • CREATE MODEL project_name.model_name

  • PREDICT column

  • USING engine = engine, params;

Here the parameters cannot be JSON, they have to be of the form:

k1 = v1, k2 = v2, k3 = v3

Hence, we split them and do some string manipulation to achieve just that.
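In isolation, the flattening and query construction look like this (buildCreateModelQuery is an illustrative standalone version of the resolver's logic, not a separate module in the project):

```typescript
// Flatten a JSON parameters object into `k1 = 'v1', k2 = 'v2'` form and
// assemble the CREATE MODEL statement, mirroring the resolver above.
function buildCreateModelQuery(
    projectName: string,
    modelData: { name: string; engine: string; predictColumn: string; parameters: Record<string, string> }
): string {
    let parameters = "";
    for (const [key, value] of Object.entries(modelData.parameters)) {
        parameters += `${key} = '${value}', `;
    }
    parameters = parameters.slice(0, -2); // drop the trailing ", "

    return `CREATE MODEL ${projectName}.${modelData.name} PREDICT ${modelData.predictColumn} USING ENGINE = '${modelData.engine}', ${parameters};`;
}

console.log(buildCreateModelQuery('test_sdk_1', {
    name: 'randomjoke',
    engine: 'openai',
    predictColumn: 'response',
    parameters: { model_name: 'gpt-3.5-turbo', prompt_template: 'give a random {{name}} joke' },
}));
// CREATE MODEL test_sdk_1.randomjoke PREDICT response USING ENGINE = 'openai', model_name = 'gpt-3.5-turbo', prompt_template = 'give a random {{name}} joke';
```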

This looks good. Now let's create a model with GraphQL.

mutation CreateModel(
  $email: Email!
  $password: String!
){
  modelCreate(auth: {
    email: $email
    password: $password
  }
  projectName: "test_sdk_1"
  modelData: {
    name:"randomjoke"
    engine: "openai"
    predictColumn: "response"
    parameters: {
      model_name:"gpt-3.5-turbo",
      prompt_template: "give a random {{name}} joke"
    }
  })
  {
    name
  }
}

The above mutation is a named operation, CreateModel (the name can be anything), that calls modelCreate. It takes the auth, the project name as the project in which we'd like to create the model, and the model data, which has four fields, viz. name, engine, predictColumn, and parameters.

We used the openai engine and named the model randomjoke in the project test_sdk_1. predictColumn is set to response, since the openai engine stores its results in the response column. The parameters part is specific to this engine: it takes model_name and prompt_template. {{name}} is a placeholder used to supply the input when using the model. That is all it takes to create a simple model in MindsDB.

We can then select whatever we need from the mutation. Keep in mind it may take seconds to minutes to train the model depending on its complexity, so some fields might be null in the response until training completes.

Using the trained model

Since we have a trained model, we can use it to query results for the inputs given to it. The predict endpoint returns results from the trained model, so let's convert that endpoint into a GraphQL mutation.

Let's create an input type for modelQuery which will take in the projectName, modelName and the data which will act as the JSON input to the model.

The mutation will take auth and query as a modelQueryInput and return a json object from the model.


const modelQueryInput = g.input('ModelQueryInput', {
    projectName: g.string(),
    modelName: g.string(),
    data: g.json(),
})

g.mutation('modelQuery', {
  args: {
    auth: g.inputRef(mindsdbAuth),
    query: g.inputRef(modelQueryInput)
  },
  resolver: 'models/query',
  returns: g.json().optional(),
})

The models/query resolver hits the /predict endpoint with a POST request whose body has a single data field. The API may return multiple objects, i.e. a list, but parsing a list out of the JSON scalar wasn't feasible, so I pick only the first JSON result. That is fine in most cases; specify the output more precisely if you need a finer-tuned response.

import {login} from "../../connect";

export default async function ModelQuery(_, { auth, query }) {
    try {
        const headers = await login(auth);
        headers.append('Content-Type', 'application/json');
        const baseUrl = auth.host || process.env.BASE_URL;
        const url = `${baseUrl}/api/projects/${query.projectName}/models/${query.modelName}/predict`;

        const body = JSON.stringify({
            data: query.data
        })
        const modelsResponse = await fetch(url, {
            method: 'POST',
            body: body,
            headers: headers,
            credentials: 'include'
        });

        if (modelsResponse.status !== 200) {
            throw new Error('Failed to query model');
        }
        const modelsData = await modelsResponse.json();
        const resp = modelsData[0];
        return resp;
    } catch (error) {
        console.error(error);
        return [error];
    }
}

So, below is the mutation to query the model randomjoke with the {{name}} as star wars and it gives the desired output.


mutation ModelQuery(
  $email: Email!
  $password: String!
) {
  modelQuery(
    auth: {
      email: $email
      password: $password
    }
    query: {
      projectName: "test_sdk_1"
      modelName: "randomjoke"
      data: {
        name: "star wars"
      }
    }
  )
}

And here's the result, an AI-generated Star Wars joke.

{
  "data": {
    "modelQuery": {
      "response": "Sure, here's a random Star Wars joke for you:\n\nWhy did Anakin Skywalker cross the road?\n\nTo get to the Dark Side!"
    }
  }
}

So, that is the GraphQL API I created for interacting with models via the MindsDB API.

Views

Views are used to query models in a friendly SQL way: you provide a query and the model name, and the result is returned from the model. A view is essentially a combination of the data and the predictor model; you can think of it as a saved query that fetches its result from the model, much like the /predict endpoint. So we can use the general SQL query endpoint to get the result of a view.

A view is usually used to store a particular input or set of inputs for fine-tuning and generating an AI output. We can then fetch the AI response with a single SQL statement.


const view = g.type('View', {
  id: g.int().optional(),
  name: g.string(),
  query: g.string(),
})

The View type simply has an id, the name, and the query to run; just as we used the {{name}} variable earlier, we can fix the input for the view inside the query.
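The projectViews query used below isn't shown above; following the same pattern as the other queries, its definition might look like this (a sketch, assuming a views/list resolver):

```typescript
g.query('projectViews', {
    args: { auth: g.inputRef(mindsdbAuth), projectName: g.string() },
    resolver: 'views/list',
    returns: g.ref(view).optional().list().optional(),
})
```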

query Views(
  $email: Email!
  $password: String!
) {
  projectViews(
    auth: {
      email: $email
      password: $password
    }
    projectName: "test_sdk"
  ) {
    name
    query
  }
}

So the above query will simply list all the views.

Creating a view is the main point of this type. We will define a view-create input and wire up a mutation that accepts it.


const viewCreateInput = g.input('ViewCreateInput', {
  name: g.string(),
  query: g.string(),
})

g.mutation('viewCreate', {
  args: { 
    auth: g.inputRef(mindsdbAuth),
    projectName: g.string(),
    viewData: g.inputRef(viewCreateInput)
  },
  resolver: 'views/create',
  returns: g.ref(view).optional(),
})

So viewCreateInput holds the type definition for the view-creation input, and we create a mutation viewCreate that takes the auth, projectName, and viewData.

The resolver lives at grafbase/resolvers/views/create.ts and is quite similar to the modelCreate resolver. Let's look at the mutation now, since that is the important part.
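For completeness, here's a sketch of that resolver, assuming the instance exposes a POST /api/projects/{projectName}/views endpoint that accepts the view's name and query wrapped in a view object (check your MindsDB version's REST docs for the exact body shape):

```typescript
// grafbase/resolvers/views/create.ts (sketch)

import {login} from "../../connect";

export default async function ViewCreate(_, { auth, projectName, viewData }) {
    try {
        const headers = await login(auth);
        headers.append('Content-Type', 'application/json');
        const baseUrl = auth.host || process.env.BASE_URL;

        const viewResponse = await fetch(`${baseUrl}/api/projects/${projectName}/views`, {
            method: 'POST',
            headers: headers,
            body: JSON.stringify({ view: { name: viewData.name, query: viewData.query } }),
            credentials: 'include'
        });

        if (viewResponse.status !== 201) {
            throw new Error('Failed to create view');
        }
        return await viewResponse.json();
    } catch (error) {
        console.error(error);
        return null;
    }
}
```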


mutation CreateView(
  $email: Email!
  $password: String!
){
  viewCreate(auth: {
    email: $email
    password: $password
  }
  projectName: "test_sdk_1"
  viewData: {
    name:"jokeview"
    query: "SELECT response FROM test_sdk_1.randomjoke WHERE name='starwars'"
  })
  {
    name
  }
}

This creates a view whose response can then be fetched quickly with the statement SELECT * FROM test_sdk_1.jokeview;

SQL Query endpoint

The SQL query endpoint is a general endpoint for POSTing a SQL query to the MindsDB instance.



const queryResponse = g.type('QueryResponse', {
  context: g.json().optional(),
  type: g.string().optional(),
  columnNames: g.string().optional().list().optional(),
  data: g.json().optional(),
})

g.mutation('sqlQuery', {
  args: {
    auth: g.inputRef(mindsdbAuth),
    query: g.string()
  },
  resolver: 'databases/query',
  returns: g.ref(queryResponse).optional(),
})

The sqlQuery mutation is used to run a query against a MindsDB instance: we take auth and the query input and return a queryResponse object. queryResponse is a type with data as JSON, columnNames as a list of strings naming the columns, type indicating the format of the result (e.g. table), and context representing the database on which the query was executed.

// grafbase/resolvers/databases/query.ts


import {login} from "../../connect";

export default async function SqlQuery(_, { auth, query  }) {
    try {
        const headers = await login(auth);
        headers.append('Content-Type', 'application/json');
        const baseUrl = auth.host || process.env.BASE_URL;
        const body = JSON.stringify({
            query: query
        })
        const queryResponse = await fetch(`${baseUrl}/api/sql/query`, {
            method: 'POST',
            headers: headers,
            body: body,
            credentials: 'include'
        });

        if (queryResponse.status !== 200) {
            throw new Error('Failed to fetch query');
        }
        const queryData = await queryResponse.json();
        return queryData;
    } catch (error) {
        console.error(error);
        return [error];
    }
}

The resolver simply sends a POST request to /api/sql/query with the query string as a field in the body.

Below is the mutation sqlQuery used to obtain results for a general query.


mutation SqlQuery(
  $email: Email!
  $password: String!
){
  sqlQuery(auth: {
    email: $email
    password: $password
  }
  query: "SELECT * FROM test_sdk_1.jokeview;"
  ){
    type
    data
  }
}

Alternate Authentication method with headers

If you don't want to include the authentication credentials in the query or mutation, you can use the headers and the context variables from grafbase.

First, in the headers section, create any header name you find relevant, for example Authorization.

Grafbase Resolver Context

import {login} from "../../connect";

export default async function ProjectModels(_, { projectName }, { request }) {
    try {
        const {headers} = request;
        let credentials = headers['authorization'].split(';');
        const auth = {
            email: credentials[0],
            password: credentials[1]
        }
        const cookies = await login(auth);
        const baseUrl = process.env.BASE_URL;
        const url = `${baseUrl}/api/projects/${projectName}/models`;

        const modelsResponse = await fetch(url, {
            headers: cookies,
            credentials: 'include'
        });

        if (modelsResponse.status !== 200) {
            throw new Error('Failed to fetch models');
        }

        const models = await modelsResponse.json();
        return models;
    } catch (error) {
        console.error(error);
        return [error];
    }
}

For this, you need to add the Authorization header in the Grafbase PathFinder (or any other GraphQL client) with email;password as its value. We first get the context from the third resolver parameter, {request}, and then destructure {headers} from the request object. This gives us access to the headers sent by the client. We then use those headers to authenticate against the MindsDB API and run the appropriate queries with that context; the headers returned from login, i.e. the cookies, are set on the following request to make authenticated queries.
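The header-parsing step in isolation (parseAuthHeader is an illustrative helper, not part of the project):

```typescript
// Split an `email;password` Authorization header value into an auth object,
// mirroring the resolver above.
function parseAuthHeader(header: string): { email: string; password: string } {
    const credentials = header.split(';');
    return { email: credentials[0], password: credentials[1] };
}

console.log(parseAuthHeader('abc@def.com;supersecret'));
// { email: 'abc@def.com', password: 'supersecret' }
```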

TIP: If you are stuck while debugging locally, use:

grafbase -t 1 dev

This shows the trace output of the dev server.

So, that is how we can use Grafbase with MindsDB to query ML models, views, projects, and various integration engines and databases.

Link to Github Project

Grafbase-Mindsdb

I have included some example queries and mutations in the examples folder of the GitHub repository.


Conclusion

Hopefully this article and project demonstration gave you some insight into MindsDB and Grafbase. I love working with GraphQL APIs, and I have recently been working with MindsDB on the Golang client, so I now have a better view of the API. Grafbase is powerful: if you know TypeScript and the crux of GraphQL, setting up a GraphQL endpoint is a breeze.

Thank you for reading, and hopefully this helped you get started with Grafbase and understand the MindsDB API.

Happy Coding :)