Building an Intelligent Gantt Chart Maker with DHTMLX and AI

Delivering intuitive ways to work with project data has become essential for modern applications. But it doesn’t seem like a trivial task, right? With the rise of AI technologies, you can achieve this goal using the DHTMLX Gantt component paired with a smart AI assistant.

In this tutorial, you’ll learn how to create a textual AI layer on top of an existing Gantt chart without rewriting the entire frontend. This layer lets you configure and manage the Gantt chart with natural language instructions (prompts), while AI translates these instructions into real actions under the hood.

Key Elements of AI Gantt Maker and How They Work Together

First, let us briefly outline the integral parts of the AI Gantt Maker and how they work together.

The system’s architecture includes 3 main elements:

  • DHTMLX Gantt – the main project visualization component
  • AI assistant – a simple chat UI for communicating with the AI, i.e., issuing commands in plain English
  • Functions layer – a predefined set of operations (also known as a function calling or tool calling schema) that AI models are allowed to use, thereby ensuring predictable and structured responses to user prompts.

Together, these elements enable users to create complete project structures, manage tasks and dependencies, adjust timeline settings, change visual styles, and export the Gantt chart via natural language commands.

For instance, here is how the AI Gantt Maker will respond to a command like “Create a project called Constructing a Fitness Club, including only the following three stages: Site Preparation, Construction Work, and Equipment Installation”:

Gantt Maker built with DHTMLX and AI

Open live demo >

To give you the full picture of how it all works as a single mechanism, here is the workflow overview:

  • The user enters a natural language instruction (prompt) into the chat UI.
  • The chat (AI assistant) sends the user’s message along with the current Gantt state to the backend.
  • The backend passes it, along with the system prompt and tools, to the selected AI service (LLM).
  • LLM responds with a structured function call related to the required operation.
  • The frontend analyzes the response and updates the Gantt chart accordingly.
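
The round trip above can be modeled as two plain JSON payloads traveling over the socket. The event names (chat_message, tool_call) and field names below follow this demo’s conventions, but treat them as illustrative assumptions:

```javascript
// Hypothetical shapes of the two Socket.IO payloads in the round trip.
// Frontend -> backend: the user's prompt plus a snapshot of the current chart.
const chatMessage = {
  text: "Add a task 'Write documentation' with a duration of 3 days",
  project: {
    tasks: [{ id: 1, text: "Planning", start_date: "2025-03-03", duration: 5 }],
    links: [],
  },
};

// Backend -> frontend: the structured function call chosen by the LLM,
// serialized as a string (as in the demo's tool_call event).
const toolCallEvent = JSON.stringify({
  cmd: "add_task",
  params: { text: "Write documentation", duration: 3 },
});

// The frontend parses the event back into a command and its arguments.
const { cmd, params } = JSON.parse(toolCallEvent);
console.log(cmd, params.duration); // add_task 3
```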

There are a few last things we want to mention before proceeding to the implementation stage:

  • Technology stack. Our demo includes a React+Vite frontend, an Express+Socket.IO backend, and an LLM accessed via the OpenAI API (any other compatible AI service can be used as well). Everything is containerized with Docker.
  • AI model compatibility. The project works best with gpt-5-nano and gpt-4.1-mini, while gpt-4.1-nano has noticeable limitations in following the schema. If you choose a different AI service, make sure it supports structured outputs (function calling).
  • Demo on GitHub. The complete, fully styled source code of the AI Gantt Maker project is available in the GitHub repository.

Without further ado, let’s explore what you need to do to build your own AI Gantt Maker.

Frontend Implementation

The frontend implementation lays the foundation for how users interact with the AI-assisted Gantt builder. At this stage, you need to initialize the DHTMLX Gantt component, add the AI assistant (chat UI) for prompt input, and establish a connection to the backend.

1. Initializing Gantt

The Gantt component is initialized once (main.js) and configured with columns, plugins (auto-scheduling, undo, export, markers, tooltip), date parsing, and zoom/fit helpers:

const gantt = Gantt.getGanttInstance();
gantt.config.columns = [
  { name: "wbs", label: "WBS", width: 60, resize: true, template: gantt.getWBSCode },
  { name: "text", label: "Task name", tree: true, width: 250, resize: true },
  { name: "start_date", align: "center", width: 100, resize: true },
  { name: "duration", align: "center", width: 80, resize: true },
  { name: "add", width: 40 },
];
gantt.plugins({
  auto_scheduling: true,
  undo: true,
  export_api: true,
  marker: true,
  tooltip: true,
});

gantt.config.auto_scheduling = true;
gantt.config.open_tree_initially = true;
gantt.config.auto_types = true;
gantt.config.scale_height = 60;
initZoom(gantt);
fitTaskText(gantt);
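
The snippet above calls an initZoom helper. DHTMLX Gantt ships a zoom extension (gantt.ext.zoom) that such a helper can build on; here is a minimal sketch, where the specific level definitions are illustrative assumptions rather than the demo’s exact configuration:

```javascript
// Hypothetical initZoom helper built on gantt.ext.zoom, the zoom extension
// shipped with DHTMLX Gantt. The level definitions below are illustrative.
const zoomLevels = [
  {
    name: "day",
    scale_height: 60,
    min_column_width: 70,
    scales: [{ unit: "day", step: 1, format: "%d %M" }],
  },
  {
    name: "week",
    scale_height: 60,
    min_column_width: 50,
    scales: [
      { unit: "week", step: 1, format: "Week #%W" },
      { unit: "day", step: 1, format: "%d" },
    ],
  },
  {
    name: "month",
    scale_height: 60,
    min_column_width: 120,
    scales: [{ unit: "month", step: 1, format: "%F, %Y" }],
  },
];

function initZoom(gantt) {
  // Register the zoom levels and start at the most detailed one.
  gantt.ext.zoom.init({ levels: zoomLevels });
  gantt.ext.zoom.setLevel("day");
}
```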

2. Setting Up the Chat UI

The chat UI (right pane) is created with a lightweight chat widget that provides an entry point for interacting with the AI assistant. To add the chat UI to your project, you need the initChat() method, which injects the chat structure and event listeners for sending messages:

// chat-widget.js - The core chat initialization function
export const initChat = ({ socket, runCommand, getProject }) => {
  // Injects the chat HTML structure into the page
  const chatWidgetContainer = document.querySelector('#chat_panel');
  chatWidgetContainer.innerHTML = `...`; // HTML for header, messages, input, etc.
  // Sets up event listeners for sending messages
  chatSubmit.addEventListener('click', function () {
    const message = chatInput.value.trim();
    sendUserMessage(message);
  });
  // ... other logic for suggestions and displaying messages
};

The chat UI is responsible for the following:

  • accepting natural-language input and sending it (with the current project state) via the WebSocket (Socket.IO) using the sendUserMessage() function
  • displaying AI messages and safely rendering Markdown content with the displayReply() function
  • offering context-aware suggestions for further user actions injected with the onCallback() function
  • providing the command guide with prompt examples

The full code for the chat widget can be found here.
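
To illustrate the first responsibility, here is a sketch of how sendUserMessage might be wired up. The chat_message event name, the factory shape, and the helper callbacks (getProject, appendMessage, showLoader) are assumptions for this sketch; the demo’s actual implementation lives in the chat widget source:

```javascript
// Hypothetical sendUserMessage factory: emits the prompt together with the
// current project snapshot so the backend can build a context-aware prompt.
function createSendUserMessage({ socket, getProject, appendMessage, showLoader }) {
  return function sendUserMessage(message) {
    if (!message) return; // ignore empty input
    appendMessage("user", message); // echo the prompt in the chat pane
    showLoader(); // show a "thinking" indicator until the reply arrives
    socket.emit("chat_message", {
      text: message,
      project: getProject(), // e.g. a gantt.serialize() snapshot
    });
  };
}
```

Bundling the project snapshot with every prompt keeps the backend stateless: each request carries all the context the LLM needs.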

3. Establishing Real-Time Communication with the Backend

Next, you need to set up communication between the frontend and backend parts of your AI Gantt Maker, ensuring that commands are delivered to the LLM and the corresponding Gantt updates arrive in real time.

This connection is established with the Socket.IO library, which uses WebSockets (with automatic fallbacks) as the communication channel between the frontend and backend.

const SOCKET_URL = import.meta.env.VITE_SOCKET_URL || `${window.location.origin}`;
const socket = io(SOCKET_URL);

This way, you will establish seamless, real-time collaboration between the Gantt chart, AI assistant, and AI service.

Backend Implementation: Connecting Gantt with AI

The backend serves as a traffic controller between the chat UI, the AI model, and the Gantt chart. More specifically, it receives user prompts, prepares and sends requests to the LLM, and returns structured commands that can be executed on the frontend.

Let us delve into the details and clarify the key aspects of backend responsibilities in the AI Gantt Maker.

1. Loading Allowed Operations (Tools) for AI

The backend provides the AI model with a set of tools that define the actions the AI model can perform with the required arguments. Each tool corresponds to a specific operation in the project: generating a project structure (generate_project), creating tasks (add_task), editing tasks (update_task), etc.

OpenAI models support a “function calling” mode, where each tool is described using a JSON schema. The model examines these schemas and, when appropriate, responds not with free-form text but with a structured function call such as:

{
  "tool": "add_task",
  "arguments": {
    "text": "Prepare design mockups",
    "start_date": "2025-03-10",
    "duration": 4,
    "parent": 12
  }
}

For instance, if the user inputs a request like “Add a task ‘Write documentation’ starting next Monday, duration 3 days, under phase 102.”, the add_task tool comes into play:

{
  type: "function",
  function: {
    name: "add_task",
    description: "Create a new task (optionally under a parent).",
    parameters: {
      type: "object",
      properties: {
        id: { type: ["string", "number"] },
        text: { type: "string" },
        start_date: { type: "string", format: "date", description: "ISO-8601 start date (e.g. 2025-05-01)" },
        duration: { type: "number", description: "Duration is always an integer" },
        parent: { type: ["string", "number", "null"], description: "Task ID to nest under, or null for root" },
      },
      required: ["text", "start_date", "duration"],
    },
  },
}

Each schema ensures that the AI produces structured responses compatible with the Gantt API, preventing invalid or unexpected commands.
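
Other tools in schemaList follow the same pattern. For comparison, here is a hypothetical sketch of what an add_link schema might look like (see schemaList.js in the repository for the demo’s actual definitions):

```javascript
// Hypothetical add_link tool schema following the same pattern as add_task.
// DHTMLX Gantt link types: "0" finish-to-start, "1" start-to-start,
// "2" finish-to-finish, "3" start-to-finish.
const addLinkSchema = {
  type: "function",
  function: {
    name: "add_link",
    description: "Create a dependency link between two tasks.",
    parameters: {
      type: "object",
      properties: {
        source: { type: ["string", "number"], description: "ID of the predecessor task" },
        target: { type: ["string", "number"], description: "ID of the successor task" },
        type: { type: "string", enum: ["0", "1", "2", "3"], description: "Dependency type" },
      },
      required: ["source", "target", "type"],
    },
  },
};
```

Constraining the type field with an enum is a cheap way to keep the model from inventing dependency types the Gantt API cannot handle.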

2. Handling User Prompts

After receiving a prompt from the frontend via Socket.IO, the backend complements it with the system prompt (context + project data) and the available tools (collected into a single schemaList array), combining everything into one comprehensive API request. Then, the backend sends the request to the AI model (OpenAI API).

import { schemaList } from "./schemaList.js";

async function talkToLLM(request, project) {
  const messages = [
    { role: "system", content: generateSystemPrompt(project) },
    { role: "user", content: request },
  ];

The generateSystemPrompt(project) function creates a detailed context for the AI, including tasks, dependencies, and project rules.
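
The demo keeps the exact prompt text in its repository; a simplified sketch of what generateSystemPrompt might produce looks like this (the wording and rules below are illustrative, not the demo’s actual prompt):

```javascript
// Hypothetical generateSystemPrompt: embeds the current project state and
// ground rules so the LLM can resolve references like "the second phase".
function generateSystemPrompt(project) {
  return [
    "You are an assistant that manages a Gantt chart.",
    "When the user asks to modify the project, respond with one of the",
    "provided tool calls; answer in plain text otherwise.",
    "Dates use the ISO-8601 format YYYY-MM-DD; durations are integer days.",
    "",
    "Current project state (tasks and links as JSON):",
    JSON.stringify(project),
  ].join("\n");
}
```

Because the full project JSON is inlined into the prompt, the model can match task names and IDs mentioned by the user against the real chart state.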

Note: the current demo version lacks the conversation history feature, but it will be added soon.

For the complete list of available functions, including task dependencies, styling, chart configuration, and export operations, refer to the full schemaList.js file in the project repository.

3. Generation and Processing of the AI Model’s Response

After receiving the request from the backend, the AI model (LLM) analyzes which function from schemaList best suits the given task and returns a structured response with the function name and parsed arguments.

const res = await openai.chat.completions.create({
  model: "gpt-5-nano",
  reasoning_effort: "low",
  messages: messages,
  tools: schemaList,
});

Then, the backend extracts the function name and parameters from the AI response:

const msg = res.choices[0].message;
let content = msg.content;
let calls = msg.tool_calls;

Now, it is necessary to convert this data into a standardized format that the frontend can understand and execute. For this purpose, the backend wraps the LLM response into the tool_call event, which the frontend can listen to:

  const toolCall = calls ? calls[0] : null;

  return {
    assistant_msg: content,
    call: toolCall
      ? JSON.stringify({ cmd: toolCall.function.name, params: JSON.parse(toolCall.function.arguments) })
      : null,
  };
}

The backend emits the tool_call event to the frontend using Socket.IO.
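
Putting these pieces together, the server-side Socket.IO wiring might look like the following sketch. The tool_call event matches the demo; the chat_message and reply event names, the registerChatHandlers factory, and the simplified error handling are assumptions:

```javascript
// Hypothetical server-side wiring: receive a prompt, ask the LLM,
// and forward the structured call back to the frontend.
function registerChatHandlers(io, talkToLLM) {
  io.on("connection", (socket) => {
    socket.on("chat_message", async ({ text, project }) => {
      try {
        const { assistant_msg, call } = await talkToLLM(text, project);
        if (assistant_msg) socket.emit("reply", assistant_msg); // plain-text answer
        if (call) socket.emit("tool_call", call); // serialized { cmd, params }
      } catch (e) {
        socket.emit("reply", `The AI service is unavailable: ${e.message}`);
      }
    });
  });
}
```

Emitting the plain-text answer and the tool call as separate events lets the frontend render the chat reply immediately while the command runner handles the Gantt update.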

4. Frontend Execution and Gantt Updates

Once the backend returns the tool_call event in response to user prompts, the frontend needs to interpret and execute it to update the Gantt chart. On the frontend, the returned tool calls are handled by the command runner (runCommand function).

// Receiving and executing tool calls
socket.on('tool_call', (txt) => {
  let handled = false;
  try {
    const { cmd, params } = JSON.parse(txt);

    if (cmd && cmd !== 'none') {
      runCommand(cmd, params);
      hideLoader();
      onCallback(cmd, params);
    }
    handled = true;
  } catch (e) {
    hideLoader();
    displayReply(`Something went wrong: ${e.message}`);
    handled = true;
  }
  if (!handled) displayReply(`Couldn't handle this: ${txt}`);
});

The runCommand function acts as a dispatcher, mapping AI commands to specific DHTMLX Gantt API calls (command-runner.js):

export default function (gantt) {
  return function runCommand(cmd, args) {
    const strToDate = gantt.date.str_to_date("%Y-%m-%d");
    const dateToStr = gantt.date.date_to_str("%Y-%m-%d");

    switch (cmd) {
      case "add_task":
        gantt.addTask(args);
        break;

      case "delete_task":
        gantt.deleteTask(args.id);
        break;

      case "add_link":
        gantt.addLink({
          id: gantt.uid(),
          source: args.source,
          target: args.target,
          type: args.type,
        });
        break;

      // ... handling other commands
    }
  };
}
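
Commands that carry date strings are where the conversion helpers declared at the top of runCommand come into play. As a sketch, an update_task handler could be factored out like this (a hypothetical helper for illustration; the demo’s actual handling lives in command-runner.js):

```javascript
// Hypothetical helper for an "update_task" command: applies only the fields
// the LLM supplied and parses ISO date strings into Date objects, which is
// what the Gantt task object expects.
function applyUpdateTask(gantt, args) {
  const strToDate = gantt.date.str_to_date("%Y-%m-%d");
  const task = gantt.getTask(args.id);
  if (args.text !== undefined) task.text = args.text;
  if (args.duration !== undefined) task.duration = args.duration;
  if (args.start_date !== undefined) {
    task.start_date = strToDate(args.start_date); // "2025-03-10" -> Date
  }
  gantt.updateTask(task.id); // repaint the task and recalculate the chart
}
```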

As a result, corresponding Gantt API methods are executed, and the Gantt chart is immediately updated in accordance with the user’s request.

This completes the interaction cycle implemented in the DHTMLX AI Gantt Maker. To find out more technical details, check out the GitHub repository of our demo project.

Note on Demo Limitations

The DHTMLX AI Gantt Maker demo works well for demonstration purposes, but it should be mentioned that we’ve intentionally simplified several aspects of such integrations that are likely to become issues in real-world scenarios. For instance, our system currently does not track conversation history, struggles to resolve ambiguous commands, and has a limited context size for large projects. These limitations need to be addressed when working on production-grade solutions.

Final Thoughts

In this tutorial, we explored how AI can be used to bring your project management experience with DHTMLX Gantt to a new level. Our demo shows how routine tasks that once kept you busy for hours can now be solved in minutes using simple instructions in plain English. While the current implementation is good enough to show the potential of using JavaScript UI components like DHTMLX Gantt with AI, there are plenty of opportunities to enhance the current solution for more complex tasks. For instance, you can expand the set of supported commands, add conversation history support, enrich functional capabilities with new operations (undo, auto-scheduling, etc.), integrate a more sophisticated chat widget, and more. With such improvements, your AI Gantt Maker can become a centerpiece of any modern project management app.

Advance your web development with DHTMLX

Gantt chart
Event calendar
Diagram library
30+ other JS components