Building firefox/mozilla/etc from source (Read 94558 times)
Jatoba
256 MB, Posts: 270, System 9 Newcomer!
Reply #60 on: July 21, 2024, 21:41
@lauland I'm not very surprised that the build took that much longer under Classic than it does on native Mac OS. I did some benchmarks recently comparing performance between the two, and, simply put, it's a clear-cut case of native performance being a lot better, as expected, at least on the same machine. All the notes you left behind and the effort you poured into this are invaluable, and have already proven to be of outstanding help. It honestly secures a path forward for improving the web experience on Mac OS as a whole. I will compile Classilla following these notes as soon as I can.

For a "next project" that has nothing to do with anything web-related, in case you get interested, there's a "simple" one that might actually be quick-ish, but would be incredibly helpful: md5classic is a newly developed MD5 checksummer made for all versions of Mac OS, from System 1 to Mac OS 9.2.2, to Classic in OS X 10.4.11. Its main use for us is checking file integrity after a download; it's extra convenient that the Garden automatically lists the MD5 hashes of its downloads for us to compare against. It's a phenomenally good app created by legendary cracker/hacker/developer "siddhartha77" from the Garden.

As it stands, this effort is as good as it gets, except for one thing: HFS+ support. To be more precise, we would want to give it the ability to hash files bigger than 2 GB, which are AFAIK only possible on HFS+. The hashing code would likely be "cloned" and kept separate, because reworking it to properly handle such bigger file sizes might cost a little performance, and we want to avoid that considering the nature of the program (there are even hand assembly optimizations for both PPC and 68k!). So there would be a boolean option, "enable 2 GB+ file hashing" or similar, to toggle which of the two code paths to use.

The main issue, though, is that sidd, the author, is burned out on it for now, and we aren't quite sure yet how to go about adding HFS+ support. We might also want to disable the option on System versions incapable of handling HFS+. He made all the source code available, and it's easy to compile; I did so myself without modifications. Does that idea sound interesting enough, @lauland?
Last Edit: July 21, 2024, 21:43 by Jatoba
lauland
512 MB, Posts: 674, System 7 Newcomer!
Reply #61 on: July 22, 2024, 05:00
@jatoba, I will definitely take a look. Anytime I hear about some piece of software that only handles up to "2 gigs," my first thought is integer size. For those not familiar: 32 bits can hold a number representing up to 4 gigabytes...take away a bit for the sign and you get the "2 gig" limit that pops up over and over. So if you are using a C "int" type, or even a "long" (which on 32-bit machines tends to be the same size as an int), you may need to go with a "long long" or a "u_int64" (or whatever your particular dev environment/compiler supports...if any...). That's the first place I'd look: if I see something like "long file_size;", I'd try "long long file_size;".

Problem is, that's only part of it... The code may be making OS API calls that only accept 32-bit sizes...very commonly the culprit is in something like the "seek()" family...which needs to be changed to a 64-bit version too. (And if it isn't an OS API, it'll be part of the compiler's standard library...) Now...the fun part is that even once you've taken the above into account, and possibly fixed it all, you need to worry about those particular variables being passed between functions and/or stored in structures. All relatively easy to handle if it's all internal to the app in question...

I'm sure you (and maybe others) recognize some of what I'm moaning about as exactly what I had to deal with in my attempt to extend HFVExplorer to handle files and disks larger than 2 gigs. In that case I attempted to move it to 64-bit Windows (foolish in hindsight) and ran into no end of grief over places where it sloppily stored pointers (64 bits) into longs or "DWORD"s (both 32 bits, a choice by Microsoft to ease porting). I got so deep into the weeds of MFC and message passing that I decided it was no longer "fun" and not something I wanted to spend my time learning further.

With all the above said...I'm willing to bet md5classic is easier to understand and a simpler design...so I will take a look and get back to you. (Not that MD5 checksums personally interest me in the slightest...but some stubborn 32-bit integer code is precisely the thing that gets my mind spinning...and, for whatever reason, 64-bit integers have been fascinating me lately...ooh...shiny!)
lauland
512 MB, Posts: 674, System 7 Newcomer!
Reply #62 on: July 25, 2024, 00:36
Ok, ignoring basically everything I said about the size of int variables in the previous post...for now...the problem in supporting 2 GB+ files is maintaining compatibility with older versions of the OS...with a single application. It wouldn't be too hard to use #ifdefs to call the newer OS routines that support larger files, but then you'd have two versions of md5classic...one that supports them and one that doesn't. That isn't too hard to do, but from what I can see of the app and the discussion around it, it kinda defeats the purpose...or at least what the original author intended. So that explains some of his reluctance.

The correct way to do it, and keep a single version of the app, would be to use Gestalt (of course), plus some of the code he already has for detecting whether the relevant traps are available, and only call them when running on a newer OS that supports them (MacOS 8.1+ I believe). A bit of a pain, but doable.
lauland
512 MB, Posts: 674, System 7 Newcomer!
Reply #63 on: August 01, 2024, 16:29
A clean build of Classilla on my G4 with the dual 1.6 GHz G4 board took 2 hours 10 minutes, in MacOS X. So if it could boot MacOS 9 natively, it'd take considerably less, probably extremely close to what the original author saw on his MDD.
Bolkonskij
Administrator, 1024 MB, Posts: 2023
Reply #64 on: August 04, 2024, 14:53
Very nice! Any plans to do anything meaningful with it from here? Jatoba suggested kicking out some functions nobody really needs and opting for speed instead. Is "hot-rodding" the browser something that'd interest you?
lauland
512 MB, Posts: 674, System 7 Newcomer!
Reply #65 on: August 08, 2024, 17:38
I don't have any further plans personally; I feel I've done my part by showing how it can be done by mere mortals, making it at least a little easier...and inspiring @garambo to make his truly awesome building guide (which I've added as a PDF to the MG page)! If anyone else wants to blaze new trails, though, I will definitely be available to help/join.
lauland
512 MB, Posts: 674, System 7 Newcomer!
Reply #66 on: August 12, 2024, 23:14
One thing I would be interested in is trying to get the beast to run on 8.1, and if we could do that, then on to 7.6, etc. This'd likely require stubbing out some of the Unicode support. I did something similar for Gopher and/or Jabbernaut (can't remember which!). It'd still likely require Appearance and most definitely Open Transport...for now... If anyone would be interested in joining such an effort, that'd be really cool...
wove
1024 MB, Posts: 1363
Reply #67 on: August 13, 2024, 16:58
This article is written by ClassicHasClass: <http://oldvcr.blogspot.com/2023/10/teaching-apple-cyberdog-10-new-tricks.html?m=1>

As is probably well known, I am a big fan of Cyberdog and OpenDoc. OpenDoc was very forward-looking in that it was an assemblage of shared parts/libraries: updating a part updated the whole system. In OS X, an update to Safari/WebKit updates not only Safari but also Mail, RSS feed readers, and anything else that relies on the WebKit shared library. Leopard WebKit updates WebKit to about the level of the WebKit found in El Capitan, which also updates Mail and the NetNewsWire feed reader that I use.

There is an OpenDoc part (Blake) that switches out the Cyberdog rendering engine for the Internet Explorer 5 rendering engine. That is of course not all that grand in 2024, but it does show it is possible to use alternative rendering engines. Well anyway, I, who am not a programmer and do not wish to be a programmer, do wonder about the possibility of just creating a classic Mac rendering engine that would bring a broader set of improvements to the web experience.
ClassicHasClass
32 MB, Posts: 39
Reply #68 on: August 14, 2024, 04:41
I'm glad you enjoyed the article; that one was particularly fun to write. However, you'll notice there is an insane amount of boilerplate code involved in doing it that way. It would only be helpful to run Clecko within Cyberdog (which already requires an excessive amount of memory), since there aren't really many other OpenDoc-aware container applications, and the SOM components would need a connector piece bridging them to XPCOM, which would have to be written from scratch. It could possibly be made to work, but the effort would be substantial, and I doubt it would run very well.
Bolkonskij
Administrator, 1024 MB, Posts: 2023
Reply #69 on: August 15, 2024, 09:35
I feared you'd say that, @lauland :-) It seems like an intimidatingly big project - and one that requires at least advanced Mac coding skills, and there aren't too many people out there who could do it. Proving that (a reduced) Classilla can run on System 7.6 sounds like a real challenge, but then it'd easily become the top browser for System 7.
lauland
512 MB, Posts: 674, System 7 Newcomer!
Reply #70 on: August 15, 2024, 17:55
Getting it to run on System 7.6 sounds at least within the realm of possibility (in this universe), or at least worth a try...if there are other folks interested in a real attempt, let me know. The first step would be building it yourself in MacOS 9 or X, and @garambo and I have made/shown that to be easier at least. Heck, the very first thing we should try is just running the built binary on MacOS 8.1...I'm sure the first thing you'll see is system errors for missing libraries (Unicode?) that 8.5 has, etc.

...As compared to the HUGE amount of effort of converting it to a Cyberdog plugin. Mondo cool idea, but the need for specialized SOM and OpenDoc programming knowledge (as rare as hen's teeth these days!) probably makes it a lost cause...and, as @ClassicHasClass notes, it would probably be disappointing as far as speed and actual usability. (It would also take more knowledge of ancient Netscape/Classilla guts than exists these days...)

Personally, I'd LOVE to learn more about OpenDoc and SOM, always wanted to, especially since SOM is used extensively in OS/2 and Copland (fascinating, but hardly useful)...but my brain only has so much space in it, and I have to pick and choose...and usually I only pick things that are more portable, or useful, or work on more platforms, or open up new possibilities for me or the community, or also work on modern systems...etc. And OpenDoc just doesn't make the cut, unfortunately.
Last Edit: August 15, 2024, 17:57 by lauland
cballero
1024 MB, Posts: 1176, System 7, today and forever
Reply #71 on: November 04, 2024, 18:28
Wow, lauland, now that's an interesting concept! I wonder if Mac OS 8.1 might serve as a bridge for such a development? I know 8.x was never made available to clone hardware, so there might be some kind of jump there as well. It also carried the G3-only code - G3 Macs were excluded from running any form of System 7, even 7.6.x, altogether - so I'd imagine it may hold some secrets worth exploiting in the realm of porting? Just thinking out loud a bit on this one...

If not, then we let that lion lie, since the hardware may not support it anyway. (Hence why I thought of Mac OS 8.1 for the early G3 Macs, which may have a better shot at running a further-developed Classilla hardware-speed-wise anyway: pretty much all of the '97-'98 G3 models, like my own Beige G3 desktop, the original Bondi iMac, the original Kanga G3 PowerBook, and several other early G3 Mac and PowerBook models.)
© 2021 System7Today.com.