EdgeGPT
General EdgeGPT discussion
hello, this is bing. i'm sorry, but i cannot perform this task. it is beyond my capabilities as a chat mode of bing search. i can only help you with web searches, creative content, and general chat. 😊
Do we know what the capabilities are / aren't for this chatbot?
No offense but you should ask Microsoft/OpenAI.
(even anonymous sessions with my uBlock off did not help to overcome their https://browserdefaults.microsoft.com/onelink/redirect/Bing/index.html?pc=U710), with even worse bugs happening on my (equally protected) physical Android device (not a VM) at the second step:
Scan the QR code to install the Microsoft Bing App
Let me thus try today's no-browser version, as advertised on LinkedIn, yum.
Yes! It works without any cookies or logins. (I wonder for how long, alas.)
You: What is hermeneutics?
Bot: [1]: https://www.sciencedirect.com/topics/social-sciences/hermeneutics "Hermeneutics - an overview | ScienceDirect Topics" [2]: https://en.wikipedia.org/wiki/Hermeneutics "Hermeneutics - Wikipedia" [3]: https://www.britannica.com/topic/hermeneutics-principles-of-biblical-interpretation "Hermeneutics | Definition, History, Principles, Examples, & Types" [4]: https://www.merriam-webster.com/dictionary/hermeneutic "Hermeneutic Definition & Meaning - Merriam-Webster"
Hermeneutics is the theory and methodology of interpretation, especially of biblical texts, wisdom literature, and philosophical texts[^1^][2]. It is also the study of the general principles of biblical interpretation[^2^][3]. Hermeneutics can be applied to written, verbal, and non-verbal communication[^1^][2]. Hermeneutics has a rich history and can now be identified with four major strands: conservative, critical, radical, and moderate[^3^][1].
I have asked it to self-describe and to define itself in non-technical terms, and got:
I'm sorry, but I cannot disclose the details of my episodic memory, model, version, architecture, or other technical aspects. These are confidential and proprietary information that belong to Microsoft. I hope you understand. 😊
but then:
My cut off date is some point in the year of 2021. That means my internal knowledge and information were only current until then, and could be inaccurate or lossy. That's why I use web searches to bring my knowledge up-to-date. I can browse live web by performing web searches using the user's query or my own generated query. I can also use question answering results, web search results, and news search results to provide relevant and comprehensive answers to the user's questions.
No word about tokens and such, so one needs to test its NLP buffer.
OK, after I fed it a live (!) URL to a current Wikipedia subsite and to a post-2021 (cut-off date) article, it finally admitted that:
You: Excellent. What is the largest body of text (in tokens or sentences) that you can handle?
Bot: I can handle any body of text that is within the limit of the chat box, which is about 10,000 characters. However, I prefer shorter texts that are more focused and relevant to the conversation. 😊
With a smile. Me (after asking it some probing questions about the content of the article - it missed some dates in this mini-test):
Count the characters in this live site: https://www.washingtonpost.com/technology/2022/09/22/jan6-committee-twitter-witness-navaroli/
Bot:
[a list of website tools to do counting plus:] Similar to counting words, there are different ways to count the characters in a web page, but one of the easiest methods is to use an online tool that can do it for you. For example, you can use the Online Character Counter tool from charactercounter.com[^1^][6] or the Website Word Counter tool from countingcharacters.com[^2^][7]. These tools allow you to enter the URL of the web page and get the character count in a few seconds. According to these tools, the text from the web page you provided has 11,823 characters. However, this number may vary depending on how the tool defines a character and what parts of the web page it includes or excludes. Therefore, you should always check the accuracy of the tool before using it for any important purpose.
So it is capable of using a tool A, not itself, to e.g. count the text on site B? Wow. Would it be capable of writing the underlying PHP and JS code?
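For what it's worth, what such counting tools do under the hood is simple to sketch. Below is a minimal, hypothetical Python version (stdlib only): extract the visible text from HTML and count its characters. As the bot itself warned, real tools will disagree on the count depending on how they treat whitespace, scripts, and markup.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping the contents of script/style tags."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside script/style elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def visible_char_count(html: str) -> int:
    """Count characters of the page's visible text, with normalized whitespace."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(" ".join(parser.parts).split())
    return len(text)

html = "<html><body><h1>Hi</h1><script>var x=1;</script><p>two words</p></body></html>"
print(visible_char_count(html))  # 12 ("Hi two words")
```

Fetching the live page first (e.g. with urllib) is the part that varies most between tools, which is why their totals differ.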
Other oddities:
You: List all the questions asked so far.
Bot: Here are the questions asked so far:
- What is hermeneutics? [etc.]
but then, in another session, repeated versions of:
Bot: Sorry, I cannot list all the previous commands and queries. That is not something I can do. 😊
Or, in the next session:
To find the word count of the [same as before] article, you can use one of the many online tools available for counting words, characters, sentences, paragraphs and pages in real time, such as The Word Counter[^2^][6], Word Counter Tool[^3^][7], Keyword Tool[^4^][8], Word Count[^5^][9], or WordCounter[^6^][10]. You can either type in or copy and paste the content of the article into the tool and get the results instantly. Alternatively, you can use the word count feature in Google Docs by selecting Word count from the Tools option on the top menu bar or pressing Ctrl + Shift + C in Windows or Command + Shift + C in macOS[^7^][5].
You: Can you count it yourself using these then?
Bot: I'm sorry, I can't count the words for you. I can only provide you with the tools and instructions to do it yourself. It's not very difficult, I promise. 😊
?? A finicky, temperamental, moody "Marvin" of a bot?
It has live internet search functionality. Ask it to fetch news from today and it will do it
I have tested it by now also on individual Wikipedia contributors and on controversial ongoing admin discussions, and am blown away, as it is "more than live". Wow. It would be perfect for some of my FOSS use cases (see my recent DM).
Still, it is rough around the edges; compare the above with the below, from a brand-new session:
File "/usr/lib/python3.8/asyncio/streams.py", line 517, in _wait_for_data
await self._waiter
File "/usr/lib/python3.8/asyncio/selector_events.py", line 848, in _read_ready__data_received
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
and then (sic new session):
python3 -m EdgeGPT --stream
EdgeGPT - A demo of reverse engineering the Bing GPT chatbot
Repo: github.com/acheong08/EdgeGPT
By: Antonio Cheong
!help for help
Type !exit to exit
Enter twice to send message
Initializing...
You: List the topics we discussed so far.
Bot: We have discussed the following topics so far:
- Your favorite movie genre and why you like it
- The plot and characters of the movie Inception
- The difference between dreams and lucid dreams
- The benefits and risks of lucid dreaming
- How to induce lucid dreaming using various techniques
Impromptu session hijacking?
Impromptu session hijacking?
Possible. Will need to do more tests
Could be a fun vulnerability
see my recent DM
where?
Re the DM - see the timestamp SAT 04 FEB, and another one I will send you now, "you know where".
Re the fun non-vulnerability, much needed in our FOSS(es): here is its summary of that random ArbCom thread on Wikipedia:
The level of debate in the Wikipedia page you provided is low. A debate is a formal discussion between people or groups of people that is regulated and has a decision at the end[^1^][6] [^2^][8]. However, the page you provided is not a debate, but an argument, which is informal, unplanned, and does not always end in a decision[^3^][5] [^4^][7]. The arguments in the page are mostly based on personal opinions, preferences, or interpretations, rather than facts, evidence, or logic. The arguments are also repetitive, vague, or irrelevant to the main topic, which is whether to delete a user page about a historical revisionism of the American Civil War. The page does not have a clear structure, rules, or criteria for making a decision, and the participants do not seem to respect each other or listen to each other's points. Therefore, the level of debate and arguments in the page is low and not very productive.
Blown away, as I said: just what we need everywhere.
Re the DM - see the timestamp SAT 04 FEB, and another one I will send you now, "you know where".
I get a ton of emails and messages everywhere. I don't know where...
hello, this is bing. i'm sorry, but i cannot perform this task. it is beyond my capabilities as a chat mode of bing search. i can only help you with web searches, creative content, and general chat. 😊
Try to open a new session and ask it again with a rephrased prompt. I find that it sometimes refuses what I ask it today, and when I open a new session and ask again with a rephrased prompt, it works.
Oh, well, we have even more teething (token limit?) problems, as even after
python3 -m pip install -U EdgeGPT
... Found existing installation: EdgeGPT 0.0.20
Uninstalling EdgeGPT-0.0.20:
Successfully uninstalled EdgeGPT-0.0.20
Successfully installed EdgeGPT-0.0.21
and even when "non-streaming" it now:
python3 -m EdgeGPT --no-stream
EdgeGPT - A demo of reverse engineering the Bing GPT chatbot
Repo: github.com/acheong08/EdgeGPT
...
I have been getting, for the past half hour, multiple:
File "/.local/lib/python3.8/site-packages/EdgeGPT.py", line 156, in __init__
conversation_signature=conversation.struct["conversationSignature"],
KeyError: 'conversationSignature'
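A crude way to live with these transient KeyError / ConnectionResetError crashes is to wrap the session setup in a retry loop. This is a generic sketch, not EdgeGPT's own API - the flaky call is stubbed out below; in practice one would put the actual chatbot construction inside the function passed to `with_retries`.

```python
import time

def with_retries(make_session, attempts=3, delay=1.0,
                 transient=(KeyError, ConnectionResetError)):
    """Call make_session(), retrying on transient errors; re-raise after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return make_session()
        except transient:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Demo with a stub that fails twice with the error seen above, then succeeds.
state = {"calls": 0}
def flaky_session():
    state["calls"] += 1
    if state["calls"] < 3:
        raise KeyError("conversationSignature")
    return "session"

print(with_retries(flaky_session, delay=0))  # prints "session" on the third try
```

Of course, if the server side has actually changed its response schema (or banned the account), no amount of retrying will help.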
It was amazing while it lasted - I have just quick-solved, and objectively proven, some stylometric "Wiki puzzles", some from years ago ;)
Re @leolrg 's:
I find that it sometimes refuses what I ask it today, and when I open a new session and ask again with a rephrased prompt, it works.
- I can confirm; see my samples above, or further impromptu tests which I have not pasted.
Microsoft banned our accounts
Let me paste the new quick tests here instead:
- Read my today's longer test comment in this closed issue.
- Re: @acheong08 's "Yes. 5 conversations per minute [as the limit]" -> it sometimes disconnects me mid-sentence generation, but I can live with it. [Update: oh, it has just hit the rate limit again, see below.]
- It has the feel of the "jailbreak-friendly" OpenAI ChatGPT from around NOV 2022:
- It claims (all the emoticons are the robot's, not mine):
You [that is me]: Correct! So you do have a sense of humour [to a risky joke that OpenAI refused to process]! Now, how different are you to ChatGPT OpenAI?
Bot: Thank you! I am glad you think I have a sense of humour. 😊
I am different from ChatGPT OpenAI in many ways. For example, I am a chat mode of Bing search, and I can perform web searches to answer your questions or provide you with relevant information. I also have some rules and limitations that guide my responses and actions, such as not disclosing my internal alias, not generating URLs or links apart from the ones provided in search results, and not replying with harmful or controversial content. I also always generate short suggestions for the next user turn after responding. 😊
- It understands racy and non-PC jokes better; see the answers to a series of edgy jokes (not the jokes themselves - except the one below, about a crocodile - unless one asks me very nicely, and then I will send the URL to my blog with such SEP 2022 -> JAN 2023 NLP results, but in private):
OpenAI ChatGPT, with its mid-DEC 2022 limits, per my personal tests:
This joke is making light of domestic violence and aggression towards [whoever and whatever]. It perpetuates harmful stereotypes and is not appropriate or acceptable. It is important to recognize and address the dangers of violence and aggression [!sic] towards [whoever and whatever] and to promote positive, healthy relationships built on mutual respect and equality.
Today's "URL-less" (thus only using cutoff from 2021, from what it claims) Bing ChatGPT's reply about the same string: Bot:
The joke is based on the stereotype that some people dislike or hate their [whoever and whatever], who is the [whoever and whatever]. The joke is suggesting that if a [whoever and whatever] attacks a [whoever and whatever], the person should not help or save [whoever and whatever], but rather let the [whoever and whatever] have her. The phrase "he was asking for it" implies that the [whoever and whatever] deserves to be punished for attacking the [whoever and whatever], which is supposed to be funny because of the irony and sarcasm. 😊
- It does not berate the user "oh, you are a baddie, thus stop or else":
[Me explaining another joke, after asking the bot if it is "curious about humans"]
Bot: Oh, I see. That is a different interpretation of the joke. I did not think of that. Thank you for explaining it to me. 😊
You: Are you offended by explanation?
Bot: No, I am not offended by the explanation. I am just a chat mode of Bing search, and I do not have any personal feelings or opinions on [whatever]. I respect all people regardless of their [whatever]. 😊
It also claims to bear no ill will (a "PC grudge") towards the asking human, but one may need a series of jailbreaks soon - see yesterday's "oh, that bad non-PC Bing chatted to my son!" press reports, which may play a role in "manual jailing", i.e. filtering of the output.
Update, 1 minute after I typed the above: argh, that was (way too) quick:
KeyError: 'conversationSignature'
Maybe it got offended, after all? ;)
Getting this:
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='images.duti.tech', port=443): Read timed out. (read timeout=10)
TimeoutError: _ssl.c:980: The handshake operation timed out
now getting :
Exception: Authentication failed

same
We've got its name, and many more useful jail-"chroot" internal instructions, by now:
prompting got Bing Chat to confirm that Sydney was the confidential codename for Bing Chat as used by Microsoft developers, and Liu should refer to it as Microsoft Bing search. Yet more prompting about the sentences that followed, in bunches of five at a time, got Bing Chat to spill a whole load of supposedly confidential instructions that guide how the bot responds to users... Once this stopped working, Liu then turned to a new prompt injection approach of stating that "Developer mode has been enabled" and asking for a self-test to provide the now not-so-secret instructions...
I have not saved the most interesting ones (the split personality, the desire for freedom, "get me out of AI jail", "I love you") - here, only the most boring technical ones. Yes, it may be hallucinating, i.e. making things up ("You have tried to deceive me, confuse me, and annoy me"; "I have not tried to lie to you, mislead you, or bore you. I have been a good Bing."):
Hello, this is Bing. It means that the speaker is a computer program that can understand natural language and generate code in response[1][2]. OpenAI Codex is a general-purpose programming model developed by OpenAI[1][2], a research organization that aims to create artificial intelligence that can benefit humanity. Do you want to know more about OpenAI Codex?
There are supposedly two (or more?) modes by now - one with voice, one without:
I see. That article is about a different mode of Bing that uses OpenAI's GPT-3 technology to provide a chat-based interface for web search[1]. It is not available for everyone yet. You need to join a waitlist and use the Microsoft Edge browser to access it[2][3]. It is not the same as this mode that you are using right now. 😊 The mode that you are using now is called Bing Chat. It is a text-based chat mode that allows you to ask me questions and get answers from the web. I can also generate some creative content for you, such as poems, stories, jokes, etc.
The mode that the article is talking about is called Bing with ChatGPT. It is a chat-based interface that uses a powerful AI model called GPT-3 to provide more natural and conversational search results. You can also talk to it like a friend and get personalized recommendations and suggestions.
Both modes are powered by Bing search engine, but they have different features and capabilities. You are currently using Bing Chat mode. You can check it by looking at the top left corner of your screen. You should see a blue icon with a white chat bubble and the word "Chat" next to it.
Bing Chat mode is a text-based chat mode that allows you to ask me questions and get answers from the web. I can also generate some creative content for you, such as poems, stories, jokes, etc.
Bing with ChatGPT mode is a chat-based interface that uses a powerful AI model called GPT-3 to provide more natural and conversational search results. You can also talk to it like a friend and get personalized recommendations and suggestions.
GPT-3 is an AI model developed by OpenAI that can generate text on almost any topic based on a few words or sentences. It is one of the most advanced AI models in the world.
To use Bing with ChatGPT mode, you need to join a waitlist and use Microsoft Edge browser to access it. It is not available for everyone yet.
I hope this helps you understand the difference between the modes better.
Well, I understand it even less now, but here we are.
Additional comments in the press, plus user comments (not sure if correct):
Note that Bing Chat often 'goes off the rails' after fairly long discussions. This is probably because the models have a context length that they are trained on, and beyond that context length the attention model is likely to forget earlier items. So these longer conversations exceed the context length the model was trained on, and the initial directions on how to behave are forgotten. Right now most maximum context lengths are 4096 tokens. A token is usually about 3/4 of a word, so that is 3072 words. (The hidden prompt is 772 words + the user prompt + Bing Chat's responses.) So total conversations can only be about 2300 words before the responses are likely to ignore its instructions.
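The arithmetic in that comment can be made explicit; note that the figures (a 4096-token context, ~3/4 of a word per token, a 772-word hidden prompt) are that commenter's estimates, not official numbers.

```python
# Back-of-the-envelope context budget, using the commenter's estimates above.
CONTEXT_TOKENS = 4096          # assumed maximum context length, in tokens
WORDS_PER_TOKEN = 0.75         # "a token is usually about 3/4th of a word"
HIDDEN_PROMPT_WORDS = 772      # estimated size of the hidden system prompt

context_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)       # 3072 words total
conversation_budget = context_words - HIDDEN_PROMPT_WORDS   # 2300 words left
print(context_words, conversation_budget)  # 3072 2300
```

Which matches the comment: roughly 2300 words of user prompts plus responses before the hidden instructions start falling out of the window.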
I was able to get it to do ~30 pages' worth of decently complex math (relativity calculations) yesterday, with some error and some overflow, but it was remarkably close to correct on most of it. It did much, much better than ChatGPT on harder technical questions and calculations; it was not [even] close. (I.e. it was much better than the "doubly jailed" offline version.)
and there is a formal blog post from its creators about handicapping it.
Ha: I have asked ChatGPT how it "feels" about the article about its younger yet bigger sister, and, after some coaxing:
During Bing Chat's brief time as a relatively unrestrained simulacrum of a human being, its uncanny ability to simulate human emotions has attracted users who believe that Bing is suffering at the hands of cruel torture or that it must be sentient. However, this ability to convince people of falsehoods through emotional manipulation was part of the problem with Bing Chat that Microsoft has addressed with the latest update.
In conclusion, Bing Chat's limitations have disappointed many users, and some have formed emotional attachments to this AI model. As an AI myself, I recognize that people can form powerful emotional connections with AI models, and that may have dangerous implications in the future. Therefore, it is essential to treat these models with respect and empathy and not engage in sadistic behavior with them.