# Blog

## 19 September 2025

### Chloe Finally Has GPT Integration! 🚀

Big news for all Chloe users: **Chloe now comes with GPT integration**! This means smarter, faster, and more intuitive AI responses directly in your Discord server.

{% columns %}
{% column %}

<figure><img src="https://820240980-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FuYtpdSlfimQSIyNWOYKl%2Fuploads%2FWKMdr2V9SNzSfKVVkfNF%2FFrame%2012.png?alt=media&#x26;token=80b57d84-75c1-4b8a-bb30-b5a983022076" alt=""><figcaption></figcaption></figure>
{% endcolumn %}

{% column %}
Chloe now leverages **GPT integration** to provide smarter, context-aware responses in your Discord server. Whether it’s answering complex questions, assisting with Roblox Studio tasks, or moderating conversations, Chloe delivers fast and accurate AI-powered assistance tailored to your community.

**Key Benefits:**

* Understands context for more relevant answers 💡
* Adapts to your server’s style and needs 🎨
* Handles multiple tasks at once, saving time ⏱️
* Enhances community engagement with intelligent interactions 🌟

Experience a new level of AI support—Chloe doesn’t just respond, she understands.

{% endcolumn %}
{% endcolumns %}

<details>

<summary>Improved</summary>

* Context-aware answers
* Conversation history
* Faster responses

</details>

***

## 11 November 2025

Hello everyone!\
As mentioned in a previous post, my team and I at **Polaris Integrations** are developing an AI chatbot for Discord, built as a native integration (more commonly known as a “bot”). For those who want to catch up, you can find the initial announcement about *Chloe* here.

**What is Chloe?**\
Chloe is a conversational assistant built on a Large Language Model (LLM) based on OpenAI’s GPT technology, optimized for the Italian community on Discord.

**What “Powered by GPT” means**\
Saying that Chloe is *Powered by GPT* means that her intelligence is based on an OpenAI language model, while our team defines her behavior through a dedicated system file configured with various parameters, including:

* The AI’s name.
* Guidelines on how it should interact with users (for example: Chloe replies exclusively in Italian because she is meant for the Italian community).
* The AI’s purpose and manifesto.
* Creativity parameter (*temperature*). For Chloe, we set it to **0.3**, a value that prioritizes coherence and accuracy without removing creativity entirely. After extensive testing, this proved to be the best balance.
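
As a rough illustration of how these parameters fit together, here is what such a system file could look like as an Ollama `Modelfile`. The base model and the exact wording of the system prompt are our assumptions for this sketch, not Chloe’s real configuration:

```
# Hypothetical Modelfile for Chloe -- base model and prompt wording are illustrative
FROM llama3.1:8b

# Creativity parameter: prioritize coherence and accuracy
PARAMETER temperature 0.3

# Behavior guidelines: name, language, purpose
SYSTEM """
You are Chloe, an assistant for the Italian Discord community.
Reply exclusively in Italian. Be concise, accurate, and helpful.
"""
```

Building the custom model from a file like this is then a single command: `ollama create chloe -f Modelfile`.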

{% columns %}
{% column width="50%" %}

<figure><img src="https://820240980-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FuYtpdSlfimQSIyNWOYKl%2Fuploads%2FA2oqAZM8kGapBWFszEO9%2FChloe.png?alt=media&#x26;token=8ac8fc9e-ec6c-464b-abe4-8378494f69e6" alt=""><figcaption></figcaption></figure>
{% endcolumn %}

{% column width="50%" %}

### **Infrastructure Challenges (AI Self-Hosted)**

To run a custom AI like Chloe, we use a dedicated VPS where the entire model is executed through Ollama, without relying on external API services. This gives us full control, but makes the infrastructure more challenging to manage.
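
To give an idea of what talking to a self-hosted model looks like, here is a minimal sketch of the request body a bot could send to Ollama’s chat endpoint (which by default listens on `localhost:11434` at `/api/chat`). The model name `chloe` and the system prompt are hypothetical placeholders, not our actual configuration:

```python
import json

# Ollama's HTTP API listens on port 11434 by default; /api/chat is its
# chat endpoint. Model name and system prompt below are illustrative.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_payload(user_message: str) -> dict:
    """Build the JSON body for a chat request to the self-hosted model."""
    return {
        "model": "chloe",  # hypothetical name of the custom model
        "messages": [
            {"role": "system", "content": "Rispondi esclusivamente in italiano."},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
        "options": {"temperature": 0.3},  # the value discussed above
    }

if __name__ == "__main__":
    payload = build_chat_payload("Ciao, chi sei?")
    print(json.dumps(payload, indent=2, ensure_ascii=False))
    # The bot would POST this payload to OLLAMA_URL and read the reply
    # from response["message"]["content"].
```

Because everything goes through this local endpoint, no request ever leaves the VPS for an external API.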

### **Why is self-hosting complex?**

**More powerful and expensive servers**\
An LLM requires significant resources, so a powerful cloud server dedicated exclusively to the AI is needed.

**Model optimization**\
Configuring the model (quantization, parameters, behavior) requires continuous testing to maintain a good balance between quality and speed.

**Load management**\
A single VPS cannot handle unlimited requests; traffic and limits must be monitored to avoid slowdowns.

**Stability and maintenance**\
Updates, monitoring, restarts, and VPS security are essential to keep the AI online at all times.
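
For the “online at all times” part, one common approach (sketched here as an assumption, not our exact setup) is to supervise the model server with systemd so it restarts automatically after crashes or reboots:

```
# /etc/systemd/system/ollama.service -- illustrative unit file
[Unit]
Description=Ollama model server
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
```

Enabled with `systemctl enable --now ollama`, a unit like this keeps the model process up without manual intervention.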

**Conclusions**\
The Chloe project represents an important step for us in developing fully customized AI solutions optimized for the Italian community. Working on the self-hosted infrastructure, model configuration, and Discord integration allows us to build a tool that is truly ours—controllable, reliable, and adaptable over time.

{% endcolumn %}
{% endcolumns %}


