
Option to change AI feature default to disabled

Open jordancluts opened this issue 7 months ago • 14 comments

I am requesting that there be a method to set the default preference for AI features to false by some means. My organization forbids the use of cloud resources (AI or otherwise) and so the recently implemented feature #3201 will cause an issue. I can of course disable it on each launch but that is inconvenient and runs the risk of error. It would be preferable if the default could be switched by the user. I am fearful that without an ability to change the default in this manner I will lose the ability to make use of Pluto as a tool to develop scripts/data processing in my organization.

jordancluts avatar Apr 28 '25 17:04 jordancluts

I am curious to hear from others as well. If you have a situation where you want to disable the new AI syntax fix by default, could you write a little story to explain your situation?

I am particularly interested in feedback from educators and students :)

EDIT: the education discussion has moved to https://github.com/fonsp/Pluto.jl/discussions/3233

fonsp avatar Apr 29 '25 07:04 fonsp

I just added https://github.com/fonsp/Pluto.jl/pull/3211 – if your university/company has blocked the ChatGPT domain (chat.openai.com), then all AI features will also be disabled.

fonsp avatar Apr 29 '25 07:04 fonsp

(@jordancluts does this maybe solve your use case? you could ask your sysadmin to block the ChatGPT domain?)

fonsp avatar Apr 29 '25 07:04 fonsp

I am hesitant about the inclusion of AI features by default. For my own use-cases, I avoid AI often since I find it fairly distracting. For my classroom, I would like to have exercise notebooks that don't use AI assistants, since I am trying to teach students how to debug software and write software. I want a certain minimum of literacy and knowledge of how to handle errors that occur during programming instead of relying on AI.

vchuravy avatar May 06 '25 08:05 vchuravy

> I am hesitant about the inclusion of AI features by default. For my own use-cases, I avoid AI often since I find it fairly distracting. For my classroom, I would like to have exercise notebooks that don't use AI assistants, since I am trying to teach students how to debug software and write software. I want a certain minimum of literacy and knowledge of how to handle errors that occur during programming instead of relying on AI.

I'm also concerned about AI getting in the way of almost everything, especially since reports of ChatGPT-induced psychosis hit the news this week. But removing the (very new, still fresh and rapidly developing) AI features from Pluto isn't going to do anything about the other 5 tabs students are going to have open. An assortment of tools now have embedded AI, including VSCode, the VSCode terminal, even eslint.

Let's discuss more about what responsibly using AI means, what we want to achieve and how we can get there.

pankgeorg avatar May 07 '25 08:05 pankgeorg

Hey @vchuravy, thanks for your input! I'm also not sure about these features yet, they are currently in Pluto as an experiment. Are you thinking specifically about the AI syntax fix feature, or about more general IDE AI features? (FYI I am not planning to add AI features that could just "solve this exercise" that are on-by-default.)

I was also thinking of an option to disable AI features for a notebook file (something in the frontmatter), that you can put in your homework/lecture file. WDYT?
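To make the idea concrete: Pluto stores notebook frontmatter as commented TOML lines at the top of the `.jl` file, so such a switch could be a sketch like the following (the `ai_features` key is hypothetical, not an existing Pluto option):

```julia
### A Pluto.jl notebook ###

#> [frontmatter]
#> title = "Homework 3"
#> ai_features = false   # hypothetical key: opt this notebook out of AI assistants
```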

I'm also inspired by the discussion in #3201; we could make this feature (and possible future AI features) more educational and less magic.

fonsp avatar May 07 '25 09:05 fonsp

To clarify the AI syntax fix a bit: this only works for ParseError, not for other exception types. My thinking is that ParseErrors are more frustrating and a barrier of entry when learning Julia compared to other errors, and I would want students to spend their capacity elsewhere. But on the other hand, I can definitely imagine situations where solving a ParseError is very pedagogical!

fonsp avatar May 07 '25 09:05 fonsp

Yeah a frontmatter switch would suffice for most of the cases I am thinking about.

vchuravy avatar May 07 '25 10:05 vchuravy

> (@jordancluts does this maybe solve your use case? you could ask your sysadmin to block the ChatGPT domain?)

I appreciate the thought put into this particular solution, if for no other reason than that it prevents odd behavior when the internet is down or similar.

Unfortunately it is unlikely to help my particular case: a) I'm at a large institution, so the level of sysadmin who could make such a decision is far above handling something like this. b) Some specific use cases may be permitted (again, large institution), so the sysadmin is unlikely to block a domain wholesale. c) The ban on our use continues even if we are on a different network, such as at a hotel, conference, etc.

For my particular concerns a Preferences.jl-style user configuration would probably suffice. This would allow me (or other users in my organization) to set a default for a particular computer when Pluto.jl is installed, something like `Pluto.set_AI_default(false)` or similar. That would allow a user to turn off the AI features without having to manually set the `AI=false` keyword argument each time. Setting something like this once is much less likely to anger our IT/security department, who hold all the cards.

Possible issues with that plan include:

  1. I'm not really familiar with Preferences.jl, so perhaps it does not actually work in the manner I suppose and doesn't match this use case
  2. Preferences.jl is not currently a dependency of Pluto, so this would add a dependency
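For what it's worth, Preferences.jl does support exactly this pattern. A sketch of how it could work, assuming a hypothetical `enable_ai_editor_features` preference key (neither the key nor the lookup inside Pluto exists today):

```julia
# One-time setup in the user's environment (sketch; key name is hypothetical).
# Writes the preference into the active project's LocalPreferences.toml.
using Preferences, Pluto
set_preferences!(Pluto, "enable_ai_editor_features" => false; force = true)

# Hypothetically, Pluto could then read the stored value back at startup:
default_ai = load_preference(Pluto, "enable_ai_editor_features", true)
```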

If such a feature is not installed I am likely to define a function in my startup.jl file that imports Pluto, then runs it with the AI flag set to false. Then train myself to always use that to launch. And hope that our IT department stays ignorant.
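That startup.jl workaround could look roughly like this (a sketch; the keyword name `enable_ai_editor_features` is an assumption based on this thread, not a confirmed `Pluto.run` option):

```julia
# ~/.julia/config/startup.jl (sketch; the keyword name is hypothetical)
function pluto_no_ai()
    # @eval defers the import and call until the function is actually run,
    # so starting Julia stays fast when Pluto isn't needed.
    @eval begin
        import Pluto
        Pluto.run(enable_ai_editor_features = false)
    end
end
```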

jordancluts avatar May 07 '25 19:05 jordancluts

If anything, I believe the use of new tools—AI-based or otherwise—should be encouraged, not discouraged. While these tools are still maturing, they clearly represent a shift in how we think about programming and developer productivity.

> I am hesitant about the inclusion of AI features by default.

That hesitation mirrors historical resistance to now-standard innovations. When compilers were first introduced, many argued that “real programmers write in assembly” or that compilers couldn’t be trusted to generate efficient code. And yet, compilers revolutionized development, making programming accessible to far more people and enabling larger, more sophisticated systems.

The shift from assembly language to high-level languages with compilers in the 1950s and 1960s has striking parallels to today’s transition from manual coding to AI-assisted programming with tools like GitHub Copilot, Cursor, Aider, and others. In both cases, a fundamental change in how code is written is occurring, leading to initial skepticism, gradual adoption, and ultimately, a redefinition of what it means to be a programmer.

The same transformation is now happening with AI assistance tools. They elevate developer experience, reduce the barrier to entry, and allow more people to participate in software development—even those without deep technical backgrounds. My mom can ask a tool to generate a website.

Using AI tools is no more “cheating” than using a compiler to turn high-level code into binary. Most developers (myself included) couldn’t tell you exactly how a function is translated to machine code—or what happens at a low level when a function is called. But that doesn’t stop us from building reliable, complex systems. Developers who want to understand those details still can—and always will.

> I would like to have exercise notebooks that don’t use AI assistants.

Why draw the line at AI tools? Should we also exclude autocomplete, syntax highlighting, formatters, linters, or static analyzers? Should we ask students to write programs in Notepad instead of VSCode? Many of those tools were seen as revolutionary in their time, like IntelliJ IDEA, which felt like a mega-AI tool when it launched, giving Java developers a significant productivity boost.

At the end of the day, tools like Valgrind and Cursor both analyze code and help improve it, and yes, they can be wrong if misused. But that doesn’t mean they shouldn’t be used. AI assistants are simply the next step in this long evolution of tooling.

If an exercise can be trivially solved by an external tool, perhaps that’s an opportunity to improve the exercise, not restrict access to tools. When I was a student, we started with Python, then moved to C/C++, and eventually to assembly. That progression made sense. Similarly, teaching today can begin with AI-enhanced environments, then gradually expose students to the underlying complexity.

Avoiding these tools doesn’t teach students resilience—it teaches them to fear the future. Instead, let’s embrace modern IDE experiences and use them as opportunities for deeper learning.

> I’m trying to teach students how to debug and write software.

@vchuravy

And that’s exactly where AI tools can help, not hinder. Imagine showing students where an AI assistant failed, then using that moment to introduce debugging, critical thinking, and deeper system understanding. Some students will dive into the internals no matter what. Students who don't want to concern themselves with the details can have fun building complex applications more easily than ever, instead of being frustrated by compiler or syntax errors.


This text of course has been refined with AI tools because english is not my native language ;P

bvdmitri avatar May 15 '25 18:05 bvdmitri

@bvdmitri folks may disagree on whether AI tools should be used or not, but I want to make sure this issue stays focused. The original issue was based upon an institutional ban on any cloud computing resources of the sort AI APIs represent, regardless of whether they involve AI. I would have the same issue if Pluto had an "upload this code block to pastebin" button. So regardless of whether AI should be used more or less, the AI tooling in Pluto poses an easy way to accidentally violate organizational policies by uploading data to a cloud server.

jordancluts avatar May 15 '25 19:05 jordancluts

I think this is a great discussion! I would like to continue talking about the education case:

I made https://github.com/fonsp/Pluto.jl/discussions/3233 to continue this discussion in the education context

fonsp avatar May 16 '25 07:05 fonsp

About the original issue (corporate environment with restrictions): I am not willing to go to great lengths to support this, because our primary target audience is (university) education. Our project does well by being focused on that 😌 We can make some small tweaks, but you need to fork Pluto to get custom behaviour that does not align with our target audience.

fonsp avatar May 16 '25 07:05 fonsp

It'd be nice to include a Preferences.jl preference for this. TBH, slurping users' content is hella fishy, independently of any "AI" excuse.

nsajko avatar Jun 07 '25 18:06 nsajko

This doesn't affect me directly, but my employer has several customers that do safety-critical work and who have forbidden any use of any kind of generative AI, to the extent that it is not allowed on the premises. My employer has had to put its initial experimental LLM-based tools into a separate download so that customers can use the product without violating the rule.

It's possible that if I worked for such an org, I would not be able to use Pluto at all because of the LLM feature, simply because it might be possible to accidentally enable it.

suetanvil avatar Aug 30 '25 21:08 suetanvil

If you block the ChatGPT domain in your company network, then web-powered Pluto AI features will be automatically disabled as well, see #3211 and our docs.
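If you want to check from the Julia side whether the block is in place, one rough sanity check is the following sketch. (This is an assumption about the blocking mechanism: it only tests DNS resolution, while Pluto's actual detection runs in the browser, so a firewall that blocks by other means would need a different check.)

```julia
# Sketch: test whether chat.openai.com resolves from this machine.
# Assumption: the organization blocks the domain at the DNS level.
using Sockets

blocked = try
    getaddrinfo("chat.openai.com")
    false   # name resolved: the domain is reachable at the DNS level
catch
    true    # resolution failed: the domain appears to be blocked
end

println(blocked ? "domain blocked; web-powered Pluto AI features should auto-disable" :
                  "domain reachable")
```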

@suetanvil This should be a nice solution for situations like the one you described, right?

fonsp avatar Aug 31 '25 19:08 fonsp

Unfortunately, I don't think that would be sufficient. These orgs tend to think about these things in terms of risk and liability. Often, this is driven by insurance and/or regulatory constraints which are all about unlikely worst-cases.

So, suppose Bob works in a company that does Super Secret Stuff, but Bob himself does generally public stuff within the org. It's possible for Bob to accidentally end up with a bit of the classified/confidential/secret stuff on his computer through no fault of his own, for that to have somehow gotten into a Pluto notebook, and for his computer to then connect to public wifi while his Pluto session is running. It's unlikely, but in an org with thousands of people, this kind of thing will happen from time to time.

If things go bad for the company and there's a lawsuit, they need to be able to show in court that they did everything right. That means having these overly broad rules and showing that they enforced them.

So it's absolutely plausible for an org to outright forbid the use of any software that tries to connect to a cloud service.

To be honest, I don't know if being able to disable the AI features in Pluto would be enough, because the code is there and it's possible to accidentally re-enable it. If it were me, I'd move the AI stuff into a separate package that needs to be intentionally installed independently of Pluto.

(To be clear, I don't currently work for such a company but I do work in an adjacent industry and I have worked for similar orgs in the past.)

suetanvil avatar Sep 06 '25 14:09 suetanvil

I don’t think it’s useful to imagine hypothetical scenarios in a vacuum. For example, someone could also argue that because some computers still have only 256 MB of memory, Pluto should be limited to that just in case someone wants to run it on such a machine. But that wouldn’t make sense. Pluto is an educational project, and its main audience is students learning to program in Julia. A single hypothetical Bob doing “Super Secret Stuff” shouldn’t outweigh the needs of thousands of real non-hypothetical students doing “Super Fun Stuff.”

> If it were me, I'd move the AI stuff into a separate package that needs to be intentionally installed independently of Pluto.

That might be nice; you can always make such a package and open a PR to Pluto to see whether it works well.

bvdmitri avatar Sep 06 '25 19:09 bvdmitri