
🤖 AI Integration Feature Ideas

Open chrisblakley opened this issue 1 year ago • 16 comments

This is just a ticket to track browser AI support (such as Gemini in Chrome) and to log feature integration ideas.

Nebula demo here using Gemini in Google Chrome: https://nebula.gearside.com/examples/chrome-ai-apis/

Quick note on how to activate it during the development phase:

  • Go to chrome://flags to enable the various Gemini AI flags, as well as setting the "Enables optimization guide on device" flag to Enabled BypassPerfRequirement.
  • Restart Chrome
  • Go to chrome://components to install/update the "Optimization Guide On Device Model" component

If you don't see the "Optimization Guide On Device Model" listed in the Chrome Components list, try running one of the AI commands in the console (ignoring any errors) and reload the components page.
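A quick way to verify the on-device model from the console is to probe for the API and check its status. Note this is a sketch: the global names have changed between Chrome versions (an `ai.languageModel` object in early previews, a `LanguageModel` global later), so both are probed here and the exact strings returned may differ by version.

```javascript
// Probe for Chrome's built-in AI (Prompt API) and report availability.
async function checkOnDeviceAi() {
  if (typeof LanguageModel !== 'undefined') {
    // Newer surface: resolves to a status string such as
    // "available", "downloadable", or "unavailable".
    return await LanguageModel.availability();
  }
  if (typeof ai !== 'undefined' && ai.languageModel) {
    // Older preview surface used capabilities() instead.
    const caps = await ai.languageModel.capabilities();
    return caps.available; // e.g. "readily", "after-download", "no"
  }
  return 'unavailable'; // unsupported browser, or flags/component not set up
}
```

Running this once (even if it errors) can also nudge Chrome into listing the component mentioned above.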

Potential ideas for AI integration will be in separate comments below for now. They may eventually be separated into their own individual issues if this feature set becomes that large/well-supported.

chrisblakley • Nov 04 '24 14:11

Maybe the 404 page suggestions can be further enhanced with AI if it can be fed the sitemap?

However, I would not want the AI to recommend things to the user that aren't 100% accurate, which is a known problem with AI. So each recommendation would need to be programmatically reviewed and sanitized...
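That sanitization step could be as simple as only surfacing suggestions that actually exist in the sitemap. A rough sketch (function names are hypothetical, not Nebula's API):

```javascript
// Keep only AI-suggested URLs that appear in the site's sitemap, so the
// 404 page can never recommend a page that does not exist.
function sanitizeAiSuggestions(aiSuggestions, sitemapUrls) {
  // Normalize trailing slashes so "/about/" and "/about" compare equal
  const known = new Set(sitemapUrls.map(url => url.replace(/\/+$/, '')));
  return aiSuggestions
    .map(url => url.replace(/\/+$/, ''))
    .filter(url => known.has(url));
}
```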

Feasibility: 🤖/5

chrisblakley • Nov 04 '24 14:11

Use the Summarizer AI to summarize contact form submissions.

In the Nebula CF7 Submission storage section, when clicking on a submission we could have the AI summarize the request. This could speed up the review process.

This could also provide another perspective on whether the submission is spam. Maybe have the AI return a one-word response to "Is this message spam?" from a choice of "Definitely", "Likely", "Inconclusive", "Unlikely", "No".

Furthermore, this AI could parse the latest X submissions (like latest 10 or latest 50) and summarize them. Perhaps a good starting point here is the Nebula Feedback system. Maybe a "Summarize" button at the top? How do we deal with uncaught spam and long submission data eating up tokens?
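Since the model may not stick to the one-word vocabulary, the response would need to be normalized before it's trusted. A minimal sketch (helper name is an assumption):

```javascript
// Allowed one-word answers to "Is this message spam?"
const SPAM_VERDICTS = ['Definitely', 'Likely', 'Inconclusive', 'Unlikely', 'No'];

// Map a raw model response onto the allowed vocabulary;
// anything off-script falls back to "Inconclusive".
function normalizeSpamVerdict(raw) {
  const cleaned = String(raw).trim().replace(/[.!]+$/, '');
  const match = SPAM_VERDICTS.find(v => v.toLowerCase() === cleaned.toLowerCase());
  return match || 'Inconclusive';
}
```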

Feasibility: 🤖🤖🤖🤖🤖/5

chrisblakley • Nov 04 '24 14:11

When using Nebula Audit Mode (or maybe in the WP admin when editing a page/post/CPT), use AI to try writing a meta description for the page content.

This could give content managers a chance to see alternative approaches to their meta descriptions.

The challenge is that AI does not reliably respect hard constraints like "no more than 150 characters".
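One mitigation is to enforce the limit in code rather than trusting the prompt. A small sketch that truncates at a word boundary:

```javascript
// Clamp a generated meta description to a character cap, cutting at the
// last word boundary so it never ends mid-word.
function clampMetaDescription(text, maxLength = 150) {
  const collapsed = String(text).replace(/\s+/g, ' ').trim();
  if (collapsed.length <= maxLength) return collapsed;
  const cut = collapsed.slice(0, maxLength);
  const lastSpace = cut.lastIndexOf(' ');
  return lastSpace > 0 ? cut.slice(0, lastSpace) : cut;
}
```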

Feasibility: 🤖🤖/5

chrisblakley • Nov 04 '24 14:11

Create a new AI metabox in the WP Dashboard that can be prompted like ChatGPT or Gemini itself.

It wouldn't know much about the current website (unless it were given a massive system prompt), and it would not be as feature-rich as ChatGPT itself; it is not worth reinventing the wheel here.

Feasibility: 🤖🤖/5

I'm not going to pursue this unless, in the future, the system prompt can be enhanced with the WordPress DB to make responses more useful to developers and content managers.

Edit: This could become much more useful if integrating the OpenAI API as described below.

✅ This has been created in Nebula Dev, but not yet pushed to Nebula Core.

chrisblakley • Nov 04 '24 14:11

Also check out transformers.js as it has entirely client-side AI capabilities with a super easy to use API that can do all sorts of functionality from many different models.

https://huggingface.co/docs/transformers.js/tutorials/vanilla-js

Available models: https://huggingface.co/models?library=transformers.js&sort=trending

https://github.com/huggingface/transformers

chrisblakley • Nov 14 '24 14:11

Another idea for a Nebula AI feature could be to feed AI daily error logs and have it summarize them and provide fixes/suggestions.

Feasibility: 🤖🤖🤖/5

Edit: This would be another great functionality for the OpenAI API described below in the next comment.
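To keep repeated notices from eating up tokens, the day's log could be condensed into unique messages with counts before it is ever sent to a model. A sketch (the timestamp format is an assumption based on typical PHP error logs):

```javascript
// Collapse a day's error log lines into unique messages with counts,
// sorted by frequency, suitable for embedding in a summarization prompt.
function condenseErrorLog(lines) {
  const counts = new Map();
  for (const line of lines) {
    // Strip a leading timestamp like "[18-May-2025 01:23:45 UTC]"
    const message = line.replace(/^\[[^\]]*\]\s*/, '').trim();
    if (!message) continue;
    counts.set(message, (counts.get(message) || 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([message, count]) => `${count}x ${message}`);
}
```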

chrisblakley • May 18 '25 01:05

Another method could be to use the OpenAI API on the PHP backend to send prompts: https://platform.openai.com/docs/quickstart?api-mode=responses

There are token costs with this method, so I would want to incorporate safeguards:

  • Set a Hard Monthly Usage Limit
  • Track Usage Programmatically
  • Estimate Token Use Before Sending a Request (would need to figure out how to do this on a page load; maybe these features would go through JavaScript AJAX but run from PHP so requests can go back and forth?)
  • Use the GPT-3.5 model
  • Maybe Log All Prompts and Costs Locally
  • Build a “Kill Switch” (php constant and/or nebula option)
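The estimation and limit safeguards could be sketched like this (the 4-characters-per-token ratio is a common heuristic, not an exact tokenizer, and the function names are illustrative rather than Nebula's actual API):

```javascript
// Rough token estimate: ~4 characters per token for English text.
function estimateTokens(prompt) {
  return Math.ceil(prompt.length / 4);
}

// Gate a request behind the kill switch and the hard monthly limit.
function mayRunPrompt(prompt, usedTokensThisMonth, monthlyTokenLimit, killSwitch = false) {
  if (killSwitch) return false; // PHP constant and/or Nebula option equivalent
  return usedTokensThisMonth + estimateTokens(prompt) <= monthlyTokenLimit;
}
```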

chrisblakley • May 18 '25 02:05

Edit: ✅ Completed this one. It is in Nebula Dev but not pushed to Nebula Core yet as of May 2025. Update: committed to Nebula May 2025.

Feature idea: Code review various functions (PHP or JavaScript) from the child theme. One idea could be to pick a function at random and review it, but another angle could be if a file was edited recently to pick a function from that file specifically.

An optional idea here could be to note timestamps of function reviews so it doesn't review the same function multiple times.

Maybe each day a different function is reviewed?

Feasibility: 🤖🤖🤖🤖🤖/5

I had ChatGPT estimate how long it would take to spend $5 in tokens if I had it review a different function every 2 hours (between 8am and 6pm): Image

This would still add up... Maybe limit it to 1 function per day, which would take about a year to spend $5...

Steps to complete this feature:

  • Create a Nebula Option for OpenAI API key
  • Create an "Enable AI Features" Nebula Option as well as listen for a killswitch constant
  • Obtain an OpenAI API key
  • Write the PHP dashboard metabox
  • Check if we have a cached response for today already and output that if so
  • Write a function that scans the child theme PHP and JS files for functions (and stores them in a way where one can be selected at random)
    • For my documentation website, I'd want this to scan the parent theme files, so I'd need a hook to override this
  • Once I have a random function, prep the prompt and request token usage from the OpenAI API <-- Remote Request
  • If the token usage is less than whatever limit I set, run the prompt <-- Remote Request
  • Also obtain from OpenAI the token usage data so we can show the available tokens (and current costs) <-- Remote Request
  • Cache the response so that it cannot run again until like 4am the next day
  • Write the function that outputs the ChatGPT response into the WP dashboard metabox
  • Obtain a link to continue the conversation in ChatGPT.com

Note to self: during development, have it just return a "fake" API response that doesn't even listen to the question prompt so that I don't eat up tokens while writing this functionality.
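The selection and dev-mode pieces of the flow above could be sketched like this (all names are assumptions; the per-day pick doubles as the daily cache key so the same function is reviewed all day):

```javascript
// Deterministic per-day pick from the scanned function list: the choice
// stays stable all day and changes the next day, so the response can be
// cached until the following morning.
function pickFunctionForReview(functionNames, today = new Date()) {
  const dayKey = today.toISOString().slice(0, 10); // e.g. "2025-05-18"
  let hash = 0;
  for (const ch of dayKey) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return functionNames[hash % functionNames.length];
}

// In dev mode, return a canned response so no tokens are spent while
// building the dashboard metabox UI.
function reviewFunction(name, { devMode = false } = {}) {
  if (devMode) {
    return { name, review: 'Fake API response (dev mode, no tokens used)' };
  }
  throw new Error('live OpenAI API call not implemented in this sketch');
}
```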

chrisblakley • May 18 '25 02:05

For any server-side prompt-based OpenAI integration, I would want the server to do the initial prompt/response, but any follow-up would be performed by the user on their own account rather than embedded in the Nebula WordPress interface (eating up API tokens).

Image

chrisblakley • May 18 '25 02:05

Image

For now, I've implemented these prompt builders for post content generation (title, meta description, and content). These do not require an AI model and use no tokens; they simply generate a prompt and copy it to the clipboard.

Eventually these can be enhanced with true native AI generation responses which will require an API key and token usage.
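The token-free builder boils down to assembling a prompt string; the UI then hands it to `navigator.clipboard.writeText()`. A minimal sketch (field names are illustrative):

```javascript
// Build a meta description prompt from post data; no AI or tokens needed.
// The caller would copy the result via navigator.clipboard.writeText().
function buildMetaDescriptionPrompt(postTitle, postExcerpt) {
  return [
    'Write a meta description of no more than 150 characters for the following page.',
    `Title: ${postTitle}`,
    `Summary: ${postExcerpt}`,
  ].join('\n');
}
```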

chrisblakley • May 21 '25 17:05

From Google I/O, Gemini in Chrome will ship in version 138:

Image

So the process for generating post prompts can be as follows:

  • If the OpenAI API Key exists, send the request to ChatGPT (this still needs to be developed)
  • Otherwise, if the Gemini API exists in the browser, use that on-device model to generate these
  • Lastly, if neither of those are available, fallback to the current system of copying the prompt to the clipboard and opening ChatGPT
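That fallback order is a small chooser function (the capability flags would be detected elsewhere; these names are illustrative):

```javascript
// Pick the AI backend per the fallback order: server-side OpenAI request,
// then the browser's on-device Gemini model, then clipboard + ChatGPT link.
function chooseAiBackend({ hasOpenAiKey = false, hasOnDeviceAi = false } = {}) {
  if (hasOpenAiKey) return 'openai-api';
  if (hasOnDeviceAi) return 'gemini-on-device';
  return 'clipboard';
}
```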

chrisblakley • May 23 '25 20:05

Looks like the Prompt API won't be available until Chrome v139...

Image

chrisblakley • Jun 26 '25 13:06

The content generation Nebula block is working now:

https://github.com/user-attachments/assets/2b5d14c9-6242-4092-9f14-79e02542a21e

An additional idea here could be a system prompt field where the user types out a summary of the post itself, which all of the Nebula AI blocks can then reference when generating their individual content.

chrisblakley • Aug 12 '25 15:08

Another idea could be to have a button in the page/post "global" sidebar that is an AI reviewer. It can review the entire post content for things like SEO and make recommendations.

Edit: I added this as a "Prompt Cookbook" item that opens a pre-made prompt in ChatGPT, since that is a more effective approach than spending tons of tokens on a response with poor formatting.

chrisblakley • Aug 13 '25 19:08

Just noting that "Prompt Cookbook" has been changed to "Prompt Launchpad" and I am going to keep it as a ChatGPT outbound link feature; I think that interface is much better suited for these kinds of prompts.

I may decide to change the interface from 4+ buttons (one for each prompt) to a single dropdown and one button.

chrisblakley • Aug 15 '25 13:08

Another idea would be to create a user field that lets each user select which AI chat platform they prefer to use for Prompt Launchpad features. Each user could then choose between ChatGPT (default), Gemini, Claude, or others without affecting the choice for other users.

Edit: This has been implemented.

chrisblakley • Sep 22 '25 03:09