From Newsgroup: comp.lang.ada
On Thu, 6 Apr 2023, Ken Burtch wrote:
"On Saturday, April 1, 2023 at 3:39:51=E2=80=AFAM UTC-4, Dmitry A. Kazakov = wrote:
On 2023-03-31 23:44, Anatoly Chernyshev wrote:
Data science people swear it's just a matter of the size of the training set used...

They lie. In machine learning, overtraining is as much a problem as undertraining. The simplest example from mathematics is polynomial interpolation becoming unstable with higher orders.

And this does not even touch contradictory samples requiring retraining, or time-constrained samples, etc.
I also did a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student who memorized the formulas but has no clue how to use them. Specifically, unit conversions (e.g. between mL, L, and m3) are completely beyond it as of now.
One must remember that ChatGPT is nothing but ELIZA on steroids.

https://en.wikipedia.org/wiki/ELIZA
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
For what it's worth on the subject of the chatbot, "Produce Ada code for solving a quadratic equation." is a terrible choice for a test of ChatGPT, as one is asking if it can do a Google search. To test its abilities, you have to pick a challenge that cannot be solved with a Google search.
My short assessment of ChatGPT, along with the history of chatbots, is available in my February blog post. I gave it a simple programming problem and it failed 3 times out of 4. That's not surprising, as I've learned since February that the chatbot doesn't actually understand programming: it uses examples off the Internet and tries to predict what you might have typed based on keyword patterns. It is an imitation of an imitation, smoke and mirrors. This is why Vint Cerf denounced it. You can read my thoughts on my blog:
https://www.pegasoft.ca/coder/coder_february_2023.html
Ken Burtch"
As an experiment I asked WWW.Perplexity.AI "Why is ChatGPT bad at Ada?" and Perplexity alleged:
"Answer
ChatGPT's performance with the Ada programming language can be perceived=20
as inadequate due to several factors related to both the language's=20 characteristics and the limitations of the AI model itself.
Factors Contributing to ChatGPT's Performance Issues with Ada
1. Complexity of Ada:
Ada is known for its strong emphasis on safety and correctness, which includes features like runtime checks and strong typing. While these features are beneficial for developing reliable software, they can also make the language more complex and verbose. This complexity may lead to misunderstandings or oversights when ChatGPT attempts to generate or analyze Ada code, especially for more intricate tasks.
2. Performance Overhead:
The safety features inherent in Ada, such as range checking and bounds checking, introduce additional overhead that can affect performance. This overhead is crucial for safety-critical applications but may not be well-handled by ChatGPT when generating code or discussing performance optimizations. The model may struggle to provide accurate assessments or optimizations for Ada code compared to languages like C or C++, which are often perceived as more straightforward in terms of low-level optimizations.
3. Limitations of AI Understanding:
ChatGPT operates based on patterns in the data it has been trained on. As a result, it may not fully grasp the nuances of niche programming languages like Ada, particularly when users ask complex questions or request sophisticated code examples. This limitation can lead to incorrect or overly simplistic responses when users seek detailed help with Ada programming.
4. User Expectations and Experience:
Users have reported that while ChatGPT can handle basic queries related to Ada, it often fails with more complex tasks. This inconsistency can lead to frustration, as users expect a level of expertise that the model may not be able to provide due to its training limitations and the inherent complexities of the Ada language.
5. Hallucination and Inaccuracy:
ChatGPT has a tendency to "hallucinate" or generate incorrect information, particularly in specialized fields where it lacks extensive training data. This issue is exacerbated in niche programming languages like Ada, where accurate information may not be as prevalent in its training corpus. Consequently, users might receive misleading advice or code snippets that do not function as intended.
In summary, ChatGPT's challenges with Ada stem from the language's complexity, performance overhead from safety features, limitations in AI understanding, user expectations, and a propensity for inaccuracies. These factors collectively contribute to a perception that ChatGPT is not well-suited for generating or analyzing Ada code effectively."
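
Concretely, the "range checking and bounds checking" Perplexity refers to under point 2 amounts to things like the following. A minimal, untested sketch with hypothetical names, not anything from the thread:

with Ada.Text_IO; use Ada.Text_IO;

procedure Check_Demo is
   --  A constrained subtype: any value outside 0 .. 100 fails the
   --  run-time range check Ada inserts on the assignment below.
   subtype Percent is Integer range 0 .. 100;
   N : Integer := 150;
   P : Percent := 0;
begin
   P := N;   --  out of range: raises Constraint_Error at run time
   Put_Line ("Never reached:" & Integer'Image (P));
exception
   when Constraint_Error =>
      Put_Line ("Out-of-range value caught by Ada's range check.");
end Check_Demo;

Those checks are real run-time work, and Ada lets you suppress them selectively (pragma Suppress (Range_Check)) where profiling justifies it; that trade-off is what the "performance overhead" point is gesturing at.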