
danupsher

macrumors member
Original poster
Tiger Terminal - A modern terminal emulator for PowerPC Macs

I've been working on bringing a usable terminal experience to PowerPC Macs still running Mac OS X Tiger (10.4). The result is Tiger Terminal - a self-contained .app that
runs on any G4 or G5 Mac with no dependencies to install.

What it does:
- Canvas-based rendering with Tk - smooth and flicker-free
- 256-color and true color support (Apple's Terminal.app on Tiger only did ~16)
- Full VT100/xterm emulation - vim, tmux, htop all work
- Tabbed interface (Cmd+T, Cmd+1-9)
- Mouse tracking for interactive apps
- Copy/paste, scrollback, font resizing
- Catppuccin Mocha color scheme
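The 256-color and true-color support in the list above comes down to decoding SGR escape parameters. A minimal sketch of that decoding step (the function names are made up for this example, not taken from the Tiger Terminal source):

```python
# Illustrative sketch of the SGR decoding behind 256-color and true-color
# support; names are invented for this example, not from the actual repo.

def sgr_params(seq):
    """Split the numeric parameters out of a raw escape like '\\x1b[38;5;208m'."""
    body = seq[2:-1]                 # strip the ESC[ prefix and the final 'm'
    return [int(p) for p in body.split(';') if p]

def parse_sgr_color(params):
    """Return ('indexed', n) for 256-color codes, ('rgb', (r, g, b)) for
    24-bit true color, or None if the parameters set no extended color."""
    if len(params) >= 3 and params[0] in (38, 48) and params[1] == 5:
        return ('indexed', params[2])        # ESC[38;5;Nm / ESC[48;5;Nm
    if len(params) >= 5 and params[0] in (38, 48) and params[1] == 2:
        return ('rgb', tuple(params[2:5]))   # ESC[38;2;R;G;Bm
    return None
```

For example, `\x1b[38;5;208m` selects palette entry 208, while `\x1b[48;2;30;30;46m` sets a 24-bit background color directly.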

How it works:

The whole app is 5 Python files (~2500 lines) running on a bundled Python 3.13 - cross-compiled for PPC from a modern Linux box. The .app includes a C launcher that
resolves paths relative to the bundle, so you just drag it to Applications and it works. No Xcode, no Python install, no package manager needed.
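For illustration, the path resolution the C launcher performs could be sketched in Python roughly as follows; the bundle layout shown is an assumption, not the repo's actual structure:

```python
# Rough Python rendition of what the bundled C launcher does: resolve the
# bundle's Resources directory relative to the launcher binary, then exec
# the bundled interpreter. The exact bundle layout here is an assumption.
import os

def bundle_paths(launcher_path):
    # Launcher assumed to live at TigerTerminal.app/Contents/MacOS/launcher
    macos_dir = os.path.dirname(os.path.abspath(launcher_path))
    contents = os.path.dirname(macos_dir)
    resources = os.path.join(contents, 'Resources')
    python = os.path.join(resources, 'python', 'bin', 'python3')
    main_py = os.path.join(resources, 'app', 'main.py')
    return python, main_py

def launch(launcher_path):
    # Replace this process with the bundled interpreter, as execv() does in C
    python, main_py = bundle_paths(launcher_path)
    os.execv(python, [python, main_py])
```

Because every path is derived from the launcher's own location, the bundle keeps working wherever the user drags it.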

34 MB download, ~99 MB installed.

Download: https://github.com/danupsher/tiger-terminal/releases

Source is on GitHub if anyone wants to build it themselves or adapt it for other vintage Mac setups: https://github.com/danupsher/tiger-terminal

Tested on an iMac G5 running Tiger 10.4.11. Should work on any PPC Mac with Tiger.

 
Updated release:

Added features and fixed some bugs.

Added full Claude Code CLI support (via SSH) for the vibe coders who want to use their G5 as a terminal input machine.


 
> Updated release:
>
> Added features and fixed some bugs.
>
> Added full Claude Code CLI support (via SSH) for the vibe coders who want to use their G5 as a terminal input machine.
Claude Code/Codex/OpenCode all work in iTerm 2 via SSH - although I have no idea if the last version supporting Tiger does. Curious to know more about your cross-compilation setup... are you using a network share for both devices, running `gcc` on a Linux box, and then, once the object files are compiled, invoking Tiger's `ar` to link everything?
 
I was going to ask if this was AI-generated slop, but then I saw you admitted it yourself. Which raises the question: why did you replace the em-dashes with regular dashes?
 
> I was going to ask if this was AI-generated slop, but then I saw you admitted it yourself. Which raises the question: why did you replace the em-dashes with regular dashes?
What?

What's the problem with using AI?
 
> Claude Code/Codex/OpenCode all work in iTerm 2 via SSH - although I have no idea if the last version supporting Tiger does. Curious to know more about your cross-compilation setup... are you using a network share for both devices, running `gcc` on a Linux box, and then, once the object files are compiled, invoking Tiger's `ar` to link everything?
Hey! There is no network share!
The flow is:
- the ld wrapper SCPs the .o files to the iMac G5 over SSH
- ld64 on the iMac links them into the final Mach-O executable
- the binary is SCP'd back to Linux

I will try iTerm 2 on Tiger and see if that works, though!
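A rough sketch of how such an ld wrapper could drive that three-step flow from the Linux side, assuming Python and OpenSSH; the hostname, user, and remote paths are hypothetical:

```python
# Sketch of the remote-link flow described above, assuming a Python ld
# wrapper driving OpenSSH; hostname, user, and remote paths are hypothetical.
import subprocess

REMOTE = 'user@imac-g5'      # the Tiger machine (hypothetical name)
REMOTE_DIR = '/tmp/xlink'    # scratch directory on the G5

def build_link_commands(object_files, output, link_flags=()):
    """Build the three commands: ship .o files, link with ld64, fetch binary."""
    remote_objs = ' '.join(
        f'{REMOTE_DIR}/{path.split("/")[-1]}' for path in object_files)
    link_cmd = ' '.join(
        ['ld64', remote_objs, *link_flags, '-o', f'{REMOTE_DIR}/out'])
    return [
        ['scp', *object_files, f'{REMOTE}:{REMOTE_DIR}/'],  # 1. push .o files
        ['ssh', REMOTE, link_cmd],                          # 2. native link
        ['scp', f'{REMOTE}:{REMOTE_DIR}/out', output],      # 3. pull binary
    ]

def remote_link(object_files, output, link_flags=()):
    for cmd in build_link_commands(object_files, output, link_flags):
        subprocess.run(cmd, check=True)
```

Keeping the native ld64 on the G5 sidesteps the need for a Mach-O-aware cross-linker on the Linux side.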
 
> Hey! There is no network share!
> The flow is:
> - the ld wrapper SCPs the .o files to the iMac G5 over SSH
> - ld64 on the iMac links them into the final Mach-O executable
> - the binary is SCP'd back to Linux
>
> I will try iTerm 2 on Tiger and see if that works, though!

The cross-compiler setup I use is here.
 
> What's the problem with using AI?
You are supporting large corporations who are vacuuming up open-source code with complete disregard for licenses, then regurgitating it without attribution. Furthermore, this code generation uses massive amounts of power and water, and is putting a massive strain on RAM and GPU supply, harming consumers. By using such tools, you learn nothing and only end up generating massive amounts of unmaintainable code.

Your repository states the code is MIT licensed, which isn't true. Currently, AI-generated code cannot be licensed, since it was generated by a machine. In the future, though, you may be open to lawsuits, since you're distributing licensed code under a different license and likely violating other parts of the licenses of the code you've stolen.
 
> You are supporting large corporations who are vacuuming up open-source code with complete disregard for licenses, then regurgitating it without attribution. Furthermore, this code generation uses massive amounts of power and water, and is putting a massive strain on RAM and GPU supply, harming consumers. By using such tools, you learn nothing and only end up generating massive amounts of unmaintainable code.
>
> Your repository states the code is MIT licensed, which isn't true. Currently, AI-generated code cannot be licensed, since it was generated by a machine. In the future, though, you may be open to lawsuits, since you're distributing licensed code under a different license and likely violating other parts of the licenses of the code you've stolen.
It's a fun little terminal emulator written in Python to make my iMac G5 more useful. I wrote it with AI assistance, the same way people use Stack Overflow, or autocomplete, or ask a mate for help.

As for learning nothing: I now have a working terminal emulator on a 20-year-old PowerPC Mac, and I've learned more about Tk and Darwin internals than I would have any other way. If you have a specific concern about a specific piece of code violating a specific license, I'm happy to look at it. Otherwise this is just a general objection to tools you don't like. Have a good day, mate.
 
> Your repository states the code is MIT licensed, which isn't true. Currently, AI-generated code cannot be licensed, since it was generated by a machine.
This is not how U.S. copyright law works in practice. If a person prompts, reviews, and integrates AI output into a project, that human contribution matters. Attaching an MIT license to a repository is about how the creator is letting other people use their work. The issue is whether the repository contains copyrighted material from another source, which is not some inherent truth to how AI generates code.

> In the future, though, you may be open to lawsuits since you're distributing licensed code under a different license, and likely violating other parts of the licenses of the code you've stolen.
This is not a valid legal argument. You can't own something like a common Python pattern or a terminal emulator loop. You would have to prove verbatim reproduction of distinctive, traceable code - high bars to clear in a court of law.

In my opinion, these questions are more important:
- Does the code work and do what he says it does?
- Does it solve an actual problem and is it useful?
- Did he personally review the code and understand it before shipping it?
- Is the code maintainable?

I feel like you had a perfect opportunity to explain WHY this project is weird - there's a lot of low-hanging fruit, like creating a terminal emulator in Python and then wrapping it in C... or using GCC 7 with a cross-compile setup that will absolutely cause weird bugs as project complexity increases - but instead of being constructive you just write it off as slop, which is arguably just as lazy as the 'slop' you're criticizing.
 
> This is not how U.S. copyright law works in practice. If a person prompts, reviews, and integrates AI output into a project, that human contribution matters. Attaching an MIT license to a repository is about how the creator is letting other people use their work. The issue is whether the repository contains copyrighted material from another source, which is not some inherent truth to how AI generates code.
>
> This is not a valid legal argument. You can't own something like a common Python pattern or a terminal emulator loop. You would have to prove verbatim reproduction of distinctive, traceable code - high bars to clear in a court of law.
>
> In my opinion, these questions are more important:
> - Does the code work and do what he says it does?
> - Does it solve an actual problem and is it useful?
> - Did he personally review the code and understand it before shipping it?
> - Is the code maintainable?
>
> I feel like you had a perfect opportunity to explain WHY this project is weird - there's a lot of low-hanging fruit, like creating a terminal emulator in Python and then wrapping it in C... or using GCC 7 with a cross-compile setup that will absolutely cause weird bugs as project complexity increases - but instead of being constructive you just write it off as slop, which is arguably just as lazy as the 'slop' you're criticizing.
Thanks for the feedback! I'm learning as I go, and I realise the cross-compiler setup is a bit janky. The C wrapper just execs the bundled Python interpreter; I didn't know a better way to do it. I'm gonna get a lot better though! Learning more every day.
 
> You are supporting large corporations who are vacuuming up open-source code with complete disregard for licenses, then regurgitating it without attribution.

As a critic of these companies myself, I'll point out that you are repeating myths. GitHub literally offers these language models to all GitHub subscribers.

> Furthermore, this code generation uses massive amounts of power and water, and is putting a massive strain on RAM and GPU supply, harming consumers.

You are confusing generation with training the models. Generation uses very little power, and datacenters are multi-purpose, so the water usage isn't solely for AI.

As an example, if you keep your computer on for 12 hours while you code fully by hand, you use just as much energy as generating code for 30 minutes - the same volume of 'usable' output for similar energy consumption.
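A back-of-envelope check of that comparison; the wattages below are assumed purely for illustration, not measurements of any real machine:

```python
# Back-of-envelope check of the comparison above. The wattages are assumed
# for illustration only, not measurements of any real hardware.
LAPTOP_W = 100        # assumed draw of a machine used for 12 h of hand-coding
GPU_SERVER_W = 2400   # assumed draw of an inference server during generation

hand_coding_kwh = LAPTOP_W * 12 / 1000      # 12 hours of hand coding
generation_kwh = GPU_SERVER_W * 0.5 / 1000  # 30 minutes of generation

print(hand_coding_kwh, generation_kwh)      # 1.2 1.2 under these assumptions
```

The point is only that the two figures land in the same ballpark under plausible assumptions, not that these exact wattages are correct.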

You can test that out by running a local large language model.

A better argument would be to focus on AI slop and security issues.

> By using such tools, you learn nothing and only end up generating massive amounts of unmaintainable code.

That may or may not happen; it comes down to the user at the end of the day. If you use models to code, you should also study a programming language or computer science for best results; otherwise you won't be as good as someone who is knowledgeable.


> Currently, AI-generated code cannot be licensed, since it was generated by a machine.

Often a user is told by a model where the code comes from if it is not original. The user makes a request; the model searches online and tells the user about the repos it found useful code in. From that point on, the user has to look at the licences and make the correct judgement. The process isn't "automated" like some AI fanatics suggest.
 
> As a critic of these companies myself, I'll point out that you are repeating myths. GitHub literally offers these language models to all GitHub subscribers.
>
> You are confusing generation with training the models. Generation uses very little power, and datacenters are multi-purpose, so the water usage isn't solely for AI.
>
> As an example, if you keep your computer on for 12 hours while you code fully by hand, you use just as much energy as generating code for 30 minutes - the same volume of 'usable' output for similar energy consumption.
>
> You can test that out by running a local large language model.
>
> A better argument would be to focus on AI slop and security issues.
>
> That may or may not happen; it comes down to the user at the end of the day. If you use models to code, you should also study a programming language or computer science for best results; otherwise you won't be as good as someone who is knowledgeable.
>
> Often a user is told by a model where the code comes from if it is not original. The user makes a request; the model searches online and tells the user about the repos it found useful code in. From that point on, the user has to look at the licences and make the correct judgement. The process isn't "automated" like some AI fanatics suggest.
It's not like I'm one-shotting a fitness tracker app in React. I've vibe-coded a standalone Linux GCC 15 compiler for PPC. Nobody does that if they are not extremely interested in what they are doing. It has required me to learn a lot as I go. There are literally no negatives to what I am doing, imo. Nobody is forced to use any of the stuff I'm building. Anti-AI people are strange.
 
> That may or may not happen; it comes down to the user at the end of the day. If you use models to code, you should also study a programming language or computer science for best results; otherwise you won't be as good as someone who is knowledgeable.
Unfortunately, like most things on the internet, nuance is not rewarded in discussion.

I work in IT Observability, so most of my job is just to look at and manage data - here's some data, automate the sending of this data, write visualizations for that data, etc. Of course we use AI for this, and have been doing so for years. Internally, we're not asking our agent, "Hey, load this 700MB raw SQL file into your context and check it out for me" - we have a freaking API specification that was well thought out, planned ahead of time with both AI input and human input from expert SQL engineers, and then carefully implemented. We use both hand-written and automated test cases and build visualizations against those tests. CI/CD builders, linters, formatters... nothing gets merged until tests pass. All of our code is manually reviewed, checked in, vuln scanned, static analysis... I'm more confident now that our data is accurate than I was when I wrote all of my SQL queries by hand, because I actually caught and fixed several of my own mistakes in this whole process. At this point, most of the Fortune 500 is integrating AI in a similar capacity.

Companies with competent hiring teams right now are looking for developers that know how to use modern tools to increase productivity and total output while still producing maintainable, high quality code. The difference between "vibe coders" and software developers isn't going to be whether or not they use AI, it's going to be whether or not they can use it effectively. The problem is that in a lot of cases, most hiring teams don't know what a good fit for the role actually looks like. Obviously if you know nothing about programming and you're live transcribing answers to interview questions, that's a pretty dishonest and disingenuous way to approach things, but using the tool to be deceptive and dishonest is different than using the tool to actually learn and make concepts accessible. Most of these people don't last long in their roles, by the way - other developers can tell when you've never heard of a singleton before.

Right now, AI sucks the most for syntax-oriented programmers who capitalized off of memorizing the entire C++ STL, and impatient open source maintainers who hate having to wade through useless PRs. It's great for people like me who have written fully functional C++ applications before, because now when I want to do something cool, I don't have to spend 3 hours wading through MSDN for example usage, getting snarky responses on Stack Overflow, or flat out being stuck on how to use the language to express the logical outcome I want - AI is vastly superior at retrieving and reciting this information back to you in the context of the particular project you are working on. I know what I want to accomplish, I know mostly what dependencies I'll need to get there, and I know what clean code looks like, so I can say "here's my build environment, get to work".

To the average person, this is a disruptive new technology that showed up out of nowhere. Maybe they can vaguely explain how it works, like, "It's an AI chatbot that can talk back to you and generate useful things for you based on what you say", but let's assume most people are not going to go in depth and explain to you the basis of the transformers paper or how RLHF or mixture of experts works. Most of the 800 million weekly active users of ChatGPT have no idea what the model is actually capable or not capable of. I think this is part of why anti-AI people can be so rude and condescending about this topic - it's just being confidently incorrect about something they don't understand very well...

This is a huge problem when you consider that more and more people are actively using it for things like personal decision making, evaluating interpersonal conflicts (e.g. "who's right in this text message argument?") or any kind of task that requires a value-based or moral judgement - on that issue I'll happily agree with the anti-AI folks. It's not your buddy, it's not your boyfriend or girlfriend, and it's definitely not sentient or conscious in any way.

I'm far less concerned about the implications for software development and much more worried about the implications for cognitive security. Hope you enjoyed my TED talk.
 