OsakanOne

The following is a cold-copy of my responses to the video. It's my second watch, and a lot of these were made *to* things said in the video *as* I was watching, to specifically address *it* rather than this thread. You're welcome to respond, but that's why it doesn't fit the "tone" of the thread or address specific users here.

Post 1: Responding to your video as I go, so expect new parts of my post/posts. I got a Tobii 4C and an alt-tab setup where it just selects whatever I rest my eyes on. Task-switching is hold mouse 4, look at the thing, release mouse 4. If I press any other buttons while it's held, it won't select until I hit the button again, so I can discover anything new. Lots of big thumbnails in positions which are clustered in groups and permanently consistent in their locations. I'm debating writing another so I can drag/drop/scale the windows based on what I want the recall behaviour to be.

Post 2: If you use Photoshop 2018 onwards, you can use the resize option to resample the image and it will use AI and denoising stuff to "blow it up". I've managed to turn images that are 400x400 into totally usable wallpapers on a 4K monitor doing this and it is fucking witchcraft. Failing that, use Alien Skin Blow Up 3, which does the same thing. It needs a licence, but the internet has ways around that. Again, pure witchcraft. Waifu2x also does this.

Post 3: Coming back to Launchy, did you ever see Quicksilver? It has this insane functionality where you can give it a noun object, a verb command and an adjective modifier (tab to switch field), and you could automate lots of very complex tasks or run them without opening a program at all, because it just made calls to those programs via hooks and plugins (a rough sketch of the idea follows these posts). That, and the help-search in OS X which searched the entire file/edit and command menus based on what you typed, highlighted the item to show you where it would be discovered, and then ran it if you hit return, are the quality-of-life features I miss most about OS X. It is amazing and I implement it in all of my projects.

Post 4: Windows 10 solves the DPI stuff. You might want to look at software called Blackbox or bblean as an alternative to your discovery menu program. Barring that, ObjectBar still works under Windows 10 and is my single favourite UI customization program ever made. It is just phenomenal. It does anything and everything you could ever want.

Post 5: You can make LiteStep work in Windows 10 with these tweaks: http://forums.litestep.info/viewtopic.php?pid=2330#p2330 -- it will also fix a lot of other programs which need to know their working directory. As far as custom UI programs go, LiteStep doesn't do anything unique or original that isn't done better elsewhere -- granted, with the same (often identical) requirements of impenetrable pseudoscripting.

Post 6: "Why does a security update screw with the graphics?" Answer: there's some stuff some programs take advantage of to capture user input or manipulate it to trick you into doing things, and it got patched out. Some tasks require user action, but by lying about what's on screen you can be asked to do one thing and then actually be doing another. Using proper draw-calls makes this impossible. Blame whoever wrote your software for wanting to be more barebones instead of going through the proper channels of how the OS handles draw-calls.
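To make the noun/verb/modifier idea from Post 3 concrete, here's a rough Python sketch assuming a hypothetical hard-coded action table -- Quicksilver's real plugin system works nothing like this, so treat every name as a placeholder:

```python
# A minimal sketch of a Quicksilver-style noun/verb/modifier command model.
# The registry and verbs here are invented for illustration; Quicksilver
# itself resolves these through plugins rather than a hard-coded table.

import subprocess

ACTIONS = {
    # verb -> callable taking (noun, modifier)
    "open": lambda noun, mod: subprocess.Popen([noun] + ([mod] if mod else [])),
    "echo": lambda noun, mod: print(f"{noun} {mod}".strip()),
}

def run_command(noun, verb, modifier=""):
    """Dispatch the (noun, verb, modifier) triple a user tabs between."""
    action = ACTIONS.get(verb)
    if action is None:
        raise KeyError(f"no action registered for verb {verb!r}")
    action(noun, modifier)

run_command("hello world", "echo")   # prints "hello world" without opening any program
```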
Post 7: Windows 10 lets you put whatever the hell you want under Libraries, even remote directories and servers if they have proper shortcuts -- though it's a bit funny about letting you put programs there. Shell32.dll is deliberately protected because, again, a bad actor could take advantage of it and trick, say, your mom into putting something somewhere. https://www.addictivetips.com/windows-tips/change-system-icons-on-windows-10/ The icon troubles are because IconPackager works by intercepting calls without changing the file. If you want to change that stuff in LiteStep, you have to change it in LiteStep itself. The easy way usually is just to make a shortcut, change its icon yourself, and then ask LiteStep not to show the shortcut button.

Post 8: "I'm having my progress erased!!" -- no, you're using old software that doesn't have a community maintaining it anymore. Don't lock in with software that's general purpose and unmaintained. That's just shitty practice. Given you have what, like 51,000 views on this video alone and however many subs, why don't you use your influence to ask your damn audience to help you put something together -- whether it's maintaining an existing dead project (I wouldn't say LiteStep; it's genuinely not worth it anymore, better to start over) or porting code and features from old projects into something new and amazing.

Post 9: One of the best quality-of-life UI additions is winkey+arrow in Windows 10. If you use Chrome/Edge and pair it with something like, say, Discord's web app, Discord Hide Servers and then OpenAsPopup, you can tile like four windows together any way you want, and when you resize one, the other is automatically resized. This is amazing. One thing I'd love to see in OSes is passive vs active UI systems, where passive only shows what you're using, and active then reveals the inputs to you as and when you want/need them based on what you're actually doing. One thing I'd ADORE is if programs had some standardized way to ingest and egress information like flow-control, and then to run programs, hide them, and put that information into a dynamic window wherever or however I want, like I do with server visualization tools -- with a plugin architecture for the visualizer front-end for it all. At that point, users are designing their own UI based on simple logical rules. Combine it with very simple visual scripting and you could end up with some crazy powerful stuff very quickly.

Post 10: "More planes, less helicopters." I love this. One thing I'd add: why do you spend so much time on your mouse? The mouse is the epitome of the helicopter because it's used to find something and make it happen. The keyboard, here, is the plane. Generally with consoles and terminals, the scary thing is -- as you said earlier with your programs listing -- the lack of capacity for discovery-informed action. If discovery-informed action can be solved and then turned into recall-informed action at the same time, to educate the user as they use their program, isn't that the most optimal UI possible? Also, way to discover the oldest rule of mouse-driven UI design: the corners are the fastest targets to access, unless you have multiple monitors and swinging into a corner would cause you to pass over it. It's why the start menu is in the corner.

Post 11: I'm in 100% agreement, but Steve was kind of the strict asshole who would yell "this isn't good enough".
The most amazing creative mind (Woz) without business smarts and constraint is unfocused and doesn't see the big picture (look up a show called Keep Your Hands Off Eizouken!, you'll like it if this is something you agree with). The problem there usually is that the big picture goes to their heads and eventually they stop supplying useful constraints. I view Jobs as a cautionary tale for this, because he's the textbook example of it.

Post 12: If you're interested in this fractional wastage of time, look into Agile software development. Specifically, its principles. They're meant for businesses, but they can be applied to UI design here. Specifically, the stuff it says about waste. http://adaptagility.co.uk/wp-content/uploads/2018/03/7-Wastes.001.jpeg Swap people for inputs here. Swap the customer for the user. Think about flow-state and stuff like that.

Post 13: Coming back to using your influence and a possible product to come out of it, isn't the most optimal product something with the power of LiteStep, but the ease of access, modification and use of something like ObjectBar with its UI-driven customization -- and then some simple scripting, either via nodes or proper scripting, so users can add their own elements?

Post 14: Oh fuck, separating the common shell elements from the OS altogether, the same way the browser was, is super overdue. I'm so glad someone else recognises this.

Post 15: Wow, good on you for recognising that flat themes do have a place. As someone who uses one and loves it, I try to strip any and all unnecessary information and I use the UI strictly to break things up and split them where and however I possibly can. I gave up developing one because Windows 10 was "enough of a compromise" for me in terms of raw themes. It's not perfect, but there are a lot of things about it I genuinely really like. I love this idea of metrics-driven interface design. Something like studying not only how quickly people perform, but also how quickly they learn, would be incredibly helpful for finding GOOD STRONG rules about user onboarding.

Post 16: My trackball is built into my keyboard and is where the numpad would be. I also have a "clit mouse" thing for my right thumb under the spacebar. Trust me, it cuts down on these kinds of timing problems tremendously -- and I'm 100% convinced that with further refinement and some eye-tracking stuff, this could be cut down to nearly zero for acts which only require a single click or a "peck" with gaze.

Post 17: What you're asking for is a way of communicating intent that has nothing to do with language, on a system that, at its fundamentals, is defined by language. That's... really not very smart. To explain: if you want to select a thing from the highest number of possible things via elimination, the best option is to pair a keyboard with an algorithm, do auto-complete, and then pair that with a system for combining those selections into higher things. A mouse isn't designed for this task. It's designed to select things which are visible and unknown to a user for the purpose of discovery and learning, or to select an absolute value within a field when a relative value will not do (i.e., a position in a 1D space like a volume value, a 2D space like a UI, a 2.5D space like a modern UI, or a 3D space like an in-game world). The mouse is supposed to be for selection, adding/removing things from a selection, or travelling through a space. For just selecting stuff to execute, it's actually very, very poor, because it is just an object which represents a projection of the eye to the screen -- and the differentiation of the eye for reading from the objects on screen via the human hand. What you're asking for ultimately isn't better mouse control but something fundamentally more absolute, in the same way a touch-screen or gaze is more absolute -- but with the precisional discipline of a mouse. The closest I've managed to this is using eye-tracking and head-tracking together, so I use my eyes for coarse movement and my head for gentle refinement. Flat out, you're too mouse-dependent, and that's one of the first things people unlearn when using computers. As you get faster and you memorise the tools, you should slowly be shifting away from the mouse, unless it is essential for that task, towards keys. But the mouse is your preference, and you're hitting the dry limits of the mouse as it stands.
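To make that selection-by-elimination idea concrete, here's a rough sketch of ranking candidates by a typed fragment plus frequency of access -- the scores and launch history are made up, and real launchers use fancier fuzzy matching:

```python
# Sketch: rank launcher candidates by prefix/substring match plus how often
# each one has been launched before. Frequencies here are invented numbers.

def score(query, name, launch_count):
    """Higher is better; zero means no match at all."""
    q, n = query.lower(), name.lower()
    if n.startswith(q):
        match = 2.0            # prefix matches beat substring matches
    elif q in n:
        match = 1.0
    else:
        return 0.0
    return match + 0.1 * launch_count   # frequency of access breaks ties

history = {"Device Manager": 42, "Developer Tools": 3, "Display Settings": 17}

def complete(query):
    ranked = sorted(
        ((score(query, name, count), name) for name, count in history.items()),
        reverse=True,
    )
    return [name for s, name in ranked if s > 0]

print(complete("de"))   # Device Manager first, because it is used the most
print(complete("dis"))  # Display Settings
```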
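Post 18 below talks about roughly 50 gestures per table and ~350 with modifiers; as a rough sketch of how those 3x3-grid gestures could be recognised, here's a toy Python version -- the window size, gestures and modifier tables are all invented for illustration:

```python
# Toy mouse-gesture recogniser: quantise a pointer path onto a 3x3 grid and
# look the resulting cell sequence up in a gesture table. Coordinates,
# gestures and the drawing area are made up for illustration.

WIDTH, HEIGHT = 900, 900   # area the gesture is drawn in

def cell(x, y):
    """Map a point to a 3x3 grid cell numbered 0..8, left-to-right, top-to-bottom."""
    col = min(int(x / (WIDTH / 3)), 2)
    row = min(int(y / (HEIGHT / 3)), 2)
    return row * 3 + col

def quantise(path):
    """Collapse a pointer path into the ordered cells it passes through."""
    cells = []
    for x, y in path:
        c = cell(x, y)
        if not cells or cells[-1] != c:
            cells.append(c)
    return tuple(cells)

# One gesture table per modifier; holding CTRL/SHIFT/ALT multiplies the space.
GESTURES = {
    None:   {(3, 4, 5): "next-tab", (5, 4, 3): "previous-tab"},
    "ctrl": {(3, 4, 5): "next-window"},
}

def recognise(path, modifier=None):
    return GESTURES.get(modifier, {}).get(quantise(path), "unknown")

# A left-to-right swipe across the middle row of the grid:
swipe = [(50, 450), (450, 450), (850, 450)]
print(recognise(swipe))          # next-tab
print(recognise(swipe, "ctrl"))  # next-window
```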
Post 18: "Do you really want to memorise all these hotkeys?" You're neglecting the 80:20 rule of design to try and back up your argument here. And yes, I DO. Why? If I use something often, I don't want to have to think about it. That's called muscle memory and automaticity (https://en.wikipedia.org/wiki/Automaticity) -- you can't develop it on a mouse because it's contextually driven and demands absolute input using a relative input device, which is a mental level of abstraction that shouldn't have to exist in the first place but does because of the limits of technology. If you're angry, go develop some automaticity. You have to do some of the leg-work, not just the software. Do I learn every single one? No. I hit winkey, type dev and hit enter brainlessly, and know Device Manager will start. Typing dis does the same for Display. This is true of every program I can recall without discovery (turning discovery actions into recall actions is onboarding). I'm not thinking about actions, they're just happening. From three letters alone I have 999 possible options, and I don't have to get them exactly right because it's ranking them against frequency of access and accounting for my common errors. If you want the maximum done and you're hitting the dry limitations of an input device, either try another or alter the UI. I genuinely want to see both the keyboard as we know it and cursor systems as we know them change enormously, as well as UI design -- but UI design is just one part of that whole mess, not the entire thing.

Post 18: Mouse gestures, while very cool, are limited to somewhere around 50 major inputs, as points in order on a 3x3 grid. You could bump this up by having the set change based on what key you hold as you perform them (for example, holding CTRL or SHIFT or ALT when initiating them, in an order or just at all), which could bump it up to around 350. The mouse is, at its heart, a relative input device, and it measures directions of travel. It just happens to be used as a pointing device, which other things are much, much better at. Using it for what it's mechanically best at is the best use of the mouse. Most of these gestures around the 50-minute mark are still slower than hotkeys. They're very nice and look very cool and I want to see more of this in operating systems, but this is something which has users memorising something with no link to language, because it's tied to space, to create absolute actions. What you'd really need is a way to onboard spaces and velocities and positions within those spaces into language, maybe based on probability or something. It's a shame we don't have a dedicated button on the mouse only for gestures. You call it a wand, so it would be almost like a spell button or cast button. That I would use the shit out of, and it sounds fucking awesome, especially if you link it to stuff like pie-menus, which also use the relative locational strengths enormously well.

Post 18: On the lightsabre: what would be fucking awesome is if the system knew whether you had your hand ON the mouse or not, using a very cheap, simple capacitive sensor, and then it just knew whether what you pointed at was what you were looking at, or whether you were moving the mouse toward or away from that point, to create some kind of new contextual input that modifies other things. A gaze point and a cursor, with a frame of reference: one your goal, the other how you get there, and the third the nature of the space you do it in (e.g., clicking buttons vs selecting objects/mass/pixels in a space).

Post 19: Per-program context actions are common in most hardware now. My Razer Nostromo n52 is a godsend for this stuff, since based on the application I'm in, I can launch another, and then use its arrow keys as shift-states to load up to four other sets of things (a rough sketch of that kind of layer table follows these posts). I'd be much happier if it would just load up like a visor that slides down from the top or something, and tell me what was on each key as I was using it, so I knew what they did before I hit the button, to aid in my own self-onboarding.

Post 20: Oh man, that is a mood. I overcame the same problem by putting next/previous tab on two of the buttons on my trackball, which maximises my ability to browse one-handed. I know that sounds suspect (not a dude, but I know that's the 'default' assumption online), but I have an injury in my hands that means I need to rest them, so being able to rest my left hand easily is a godsend and gives me an excuse to do research instead of work -- which in turn helps me work better because, y'know, research. For the record, I close the tabs with middle-click or CTRL + W. As for why I use Chromium/Edge: I can banish every UI element except for the titlebar (which I would love to get rid of via some kind of auto-hide or toggle, but haven't found a way to yet) and I use a lot of quality-of-life plugins. Examples include Picture-in-Picture for YouTube, hiding Discord's sidebar, some stuff for privacy, saving YouTube videos, and a tree-style tab viewer that lets me see which websites spawned which websites based on how I was navigating. I also use the bookmark bar to a ridiculous degree because it's all 100% searchable with custom names and icons. It gets nuts when you type the name of a website, hit tab, and then type what you want to search using that website's own search tool. I use this every day for Wikipedia, Library Genesis (free books!), Sci-Hub (free scientific whitepapers), FreeFullPDF (another godsend) and archive.org to rip Google Books and Google Scholar's content. They're amazing resources and I recommend them to everyone who reads this, especially if you work in education or the sciences, because none of what you pay to read a journal or paper goes to the person who wrote it; it goes to the publishers who monopolize that shit. Fuck those guys, I hate them.
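As a rough sketch of what those per-application shift-states amount to (and what a slide-down visor could be drawn from), here's an invented table of application -> layer -> key bindings -- none of it reflects the actual Nostromo driver or its profiles:

```python
# Sketch: per-application key layers, where an arrow key acts as a shift-state
# that swaps in a different set of bindings. Applications, layers and actions
# below are placeholders, not a real macro-pad profile.

BINDINGS = {
    "photoshop": {
        "base": {"01": "brush", "02": "eraser", "03": "undo"},
        "up":   {"01": "zoom-in", "02": "zoom-out", "03": "fit-to-screen"},
    },
    "browser": {
        "base": {"01": "next-tab", "02": "previous-tab", "03": "close-tab"},
    },
}

def action_for(app, layer, key):
    """Resolve a key press to an action, falling back to the base layer."""
    layers = BINDINGS.get(app, {})
    return layers.get(layer, {}).get(key) or layers.get("base", {}).get(key, "unbound")

def overlay(app, layer):
    """What a slide-down 'visor' could display: every key's meaning in this state."""
    layers = BINDINGS.get(app, {})
    merged = dict(layers.get("base", {}))
    merged.update(layers.get(layer, {}))
    return merged

print(action_for("photoshop", "up", "01"))   # zoom-in
print(action_for("photoshop", "up", "04"))   # unbound
print(overlay("photoshop", "up"))            # the full key map for this shift-state
```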
Post 19: One of the things AI would be good for is categorizing user input into clusters. Howard Moskowitz, when asked to come up with the best pickle, discovered there is no perfect pickle: there are only perfect pickles -- things fit into clusters of categories -- and it hits me that we're not in the same cluster, but are both very fed up with the status quo. From that, couldn't you use heuristic assessment to give an AI all of the limitations of human hands and eyes and thinking -- then just generate UI shit at random, have that AI (limited mechanically and mentally according to those clusters of how we work) try to navigate those systems, and then rank them using the metrics system you proposed? You'd get some really wild innovations, especially if some of the onboarding stuff is also imitated and ranked based on how quickly users learn new things from scratch, but it has to be done right or it would make things worse, not exponentially better.

Post 20: WindowBlinds! Fuck yeah, that wonderful BeOS theme you showed first -- I used that for YEARS! I love that little bump. It's so cute. In theory, if WindowBlinds works, every theme should work, because they just hook into the features of WindowBlinds itself. If it works on 10, they work on 10.

Post 21: Lots of themes have low contrast because a lot of users are working in the dark, literally. In the same way Windows 10 picks your colour and shows you the feedback immediately, there should be an n-colour option in WindowBlinds for contrast, not just colour, for Windows 10, so you can enhance and make the colours pop using a fucking slider, not editing values or fucking about with menus somewhere and spending who knows how long dicking with stuff. Better still, what if you could seamlessly chop elements from different themes, lump them together based on your preferences, and then do some software tweaks to those values without opening any image-editing software, to make the theme you want out of the chunks you want? That's looooong overdue. At some point, could you go over the themes you do like and what you like about them -- and then also what you dislike -- discussing not the locations of objects (unless they differ massively from the defaults) but the colours, fonts, shape language, trends you liked/disliked from different eras, and stuff like that? I would watch the shit out of that video like two or three times, and I'd gladly sub to a Patreon to motivate you into making it if you had one.

Post 22: The good Linux themes are hidden in /r/unixporn.

Post 23: The terminal is faster in some cases, but it's got zero discovery and demands recall or self-managed discovery, which flies in the face of everything we've been doing for the last 40 years with UI design, period. Whoever solves that problem is going to have fixed UIs forever. Sorta like something which reads what a program exposes and constructs a UI by stepping backwards through the logical rules of a workflow you give it, but then allows deliberate exceptions to those rules where they are sub-optimal due to their consequences or frequency of access.

Post 24: Oh shitting christ, a flow pie-menu that has islands rather than terminations -- that's fucking genius right up until text or a search is needed. How do you solve the problem of search? The other problem is that when you have islanding, you have to work back through the space you've created -- and in turn, it's very hard to go backwards in these systems because they tend to be destructive in nature (i.e., you can't revert to the previous step you were in without an entirely new action from the cursor, or without requesting an undo operation -- it has no resting state). So you could use the mouse-wheel to rewind or push forward through previous states and examine the last state, and then list the history of actions as a column somewhere onscreen. I guess, then, the solution would be to engineer a resting state for it -- so as you swipe, your past actions stack somewhere that's reachable -- and the option to proceed forwards or backwards could be defined by which modifier you hold (shift/control/alt). It's that tradeoff between discovery and output. The fastest input is strictly relative in direction from an origin on a 2D plane, but you get more inputs by adding further options -- say, for example, you are shown the conclusion of the direction you select, like a branch or a tree, and you can see the ones you get nearer to -- which is even LESS blind than tabs and buttons!
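As a rough sketch of that resting state, here's a toy action history you could rewind and branch from with the wheel while a modifier is held -- nothing here corresponds to an existing pie-menu implementation, and the actions are just strings:

```python
# Sketch: a non-destructive action history for a flow/pie menu. Scrolling back
# does not delete later steps until a new action is committed, which is the
# "resting state" described above.

class ActionHistory:
    def __init__(self):
        self.steps = []     # every committed action, oldest first
        self.cursor = 0     # how many steps are currently "applied"

    def commit(self, action):
        """A new gesture: discard the rewound tail, then append."""
        self.steps = self.steps[: self.cursor]
        self.steps.append(action)
        self.cursor = len(self.steps)

    def scroll(self, delta):
        """Mouse-wheel while the modifier is held: move through past states."""
        self.cursor = max(0, min(len(self.steps), self.cursor + delta))

    def column(self):
        """What an on-screen history column could show, marking the current state."""
        return [
            ("> " if i == self.cursor - 1 else "  ") + step
            for i, step in enumerate(self.steps)
        ]

h = ActionHistory()
for a in ["select brush", "set size 12", "paint stroke"]:
    h.commit(a)
h.scroll(-1)                 # rewind one state without destroying anything
print("\n".join(h.column()))
h.commit("set size 20")      # branching from the rewound state discards the tail
print(h.steps)
```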
Post 24: I think the problem with the console or terminal is its infinite inputs for non-infinite outputs. UI use has to be thought of as an elimination of potential options, drilling down into specifics by using a convention people understand. If there are nine potential options, you could get by with only one input, because each character position can distinguish as many options as there are keys on the keyboard. If it's thousands, or a value rather than a specific act, things get more complicated. "How much" of something do you want? Where does it begin and end? These are things that need to be solved. There are lots of soft, so-so solutions for these problems, but not many "very good" ones. I've had success pairing the trackball with gaze recognition -- so my gaze picks a slider, and then what I do with my hand changes the value while CTRL is held. It's VERY fast but not 100% reliable, as it depends on Windows knowing what a slider is for all programs consistently. That's a lot easier on a Mac, because the UI elements are all built the same way in most cases, and much harder on Windows.

Post 25: One of the best examples of a pie-menu I've dealt with is a 3D modelling app called Modo. Some combinations of hotkeys call up menus, and you drag and release the button. It is blindingly fast, but instead of using big icons for everything it uses text, and keeps things to 8 or fewer options. If you rest the cursor in a direction without releasing and click instead, you can drill down into finer options -- e.g., pushing in that direction and then scrolling to define the value of an operation, then releasing, is a thing you can do. It's very cool stuff.

Post 26: I'm currently working on a game and I want to try using some of my UI ideas for it. At some point I'll put out an example or a demo, but until then I'm mainly working on the guts of the game itself. I'd be happy to work with others on a project, but only as a designer, given my time is currently taken up as a programmer and I'm already transitioning toward a software-designer role anyway.

Post 27: You can do the start-menu-anywhere trick using ObjectBar, and design your own start menu however you want based on whatever rules you want. Combine that with some simple scripting to generate shortcuts based on simple rules about what you're doing inside a given folder, and it is horrifyingly efficient.
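As a rough sketch of rule-generated shortcuts, here's a toy version that scans a folder and emits (label, target) pairs per rule -- the rules and labels are placeholders, and the ObjectBar or launcher config would have to be written from the output separately:

```python
# Sketch: rule-driven shortcut generation for whatever folder you are working
# in. Each rule maps a filename test to a label; the output is a plain list a
# dock/launcher config could be generated from. Rules are illustrative only.

from pathlib import Path

RULES = [
    (lambda p: p.suffix == ".psd",  "Edit {name} in Photoshop"),
    (lambda p: p.suffix == ".py",   "Run {name}"),
    (lambda p: p.name == "build",   "Open build output"),
]

def shortcuts_for(folder):
    """Return (label, target) pairs for every file a rule matches."""
    results = []
    for path in sorted(Path(folder).iterdir()):
        for test, label in RULES:
            if test(path):
                results.append((label.format(name=path.name), str(path)))
                break   # first matching rule wins
    return results

for label, target in shortcuts_for("."):
    print(f"{label} -> {target}")
```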
If you made it this far, power to you. For anyone curious, my peripherals of choice are a Tobii 4C; five monitors (three for work, two for everything else, set above my gaze so looking up requires more mechanical work and thus I spend less time wasting time on them for things like chat, etc.); a Nostromo n52 SpeedPad with a lot of custom scripts; a modified Logitech N570; a modified HHKB2 Lite; and lots and lots of scripting.