human-headers
How is this any better than humans.txt?
See http://humanstxt.org
It doesn't purport to be better than humans.txt. It's different. Here's what I mentioned to someone else who asked a similar question:
"This idea was partially inspired by the humans.txt idea. Something that you could just install in your browser and leave on as your browse the web. By configuring it at the header level, it makes it pretty easy to simply apply it to all HTTP requests. This allows the info to be parsed whether it's coming from an API response, or a website response, or anything else."
Headers, really? We already have a bloat of unnecessary headers being sent with every request; that's exactly what SPDY is being developed to address. https://blogs.akamai.com/2014/01/spdy-http-2.html
@danekszy it doesn't claim to be performant or to avoid adding bloat, but I understand where you're coming from.
HTTP headers are a widely available and generally accessible transport layer for developers. Do you have a suggestion for another layer or implementation that would be as easy?
I'm not sure what you mean by easy. I think this is exactly what humans.txt is for, and if you want that info available conveniently, write an extension that looks for the file on the current (sub)domain and, if it doesn't exist, tries a (sub)domain one level up...
@danekszy humans.txt is an additional file requiring an additional, specific HTTP request. It requires that file to be stored, hosted, and available in the root of an existing site.
I can modify the service layer and simply append a few bytes. That information is then available not only to a traditional GET request, but also when an AJAX call occurs, giving back-end developers a chance to show their stuff, or even on a POST request...
What's more expensive in terms of the health of the web? An additional HTTP request with every call to attempt to look for something that may or may not be there... or listening for a few bytes on the wire?
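To make that "few bytes on the wire" point concrete, here's a quick sketch using Python's requests library. The api.example.com URLs and the `X-Human` header are placeholders:

```python
import requests

# Both a plain GET and a POST (the kind of call an AJAX request makes)
# carry the same response headers, so attribution set at the header
# level is visible on any HTTP exchange, not just a page load.
for response in (
    requests.get("https://api.example.com/users"),
    requests.post("https://api.example.com/users", json={"name": "Ada"}),
):
    print(response.request.method, response.headers.get("X-Human"))
```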
Maybe the solution needs to be fleshed out more at the server layer. Perhaps a browser extension could pass an additional Accept header with each request, so that the info would only be sent if asked for?
How about X-Humanstxt: true? But you don't have to ask for the .txt file on every request anyway. Google doesn't ask for robots.txt every time it visits your site. And honestly, not everybody is interested in the humans behind the websites, and most of the time you know who created the stuff you visit every day.
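A minimal sketch of that opt-in idea, assuming the hypothetical X-Humanstxt request header suggested above and a Flask back end:

```python
from flask import Flask, request

app = Flask(__name__)

@app.after_request
def maybe_add_humans_header(response):
    # Only spend the extra bytes when the client explicitly opted in.
    # Both header names here are hypothetical suggestions from this
    # thread, not established conventions.
    if request.headers.get("X-Humanstxt") == "true":
        response.headers["X-Humans"] = "https://example.com/humans.txt"
    return response
```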
An additional Accept header is what popped into my mind too while reading your answer. Regarding the rest of your post: of course an additional request is worse, no doubt, but we're talking about only a handful of people who would use that extension, and it could be written so that the info is retrieved on a button click. Either way, that data stays optional, which can't be said about headers; those are sent regardless of whether the user requested them (not talking about the Accept solution). I don't see your point regarding AJAX requests: again, unnecessary bloat.

And then there's the configuration issue. To set up headers you need to modify your server config, which isn't convenient and is sometimes even impossible (on some shared hosting providers), while putting up a simple txt file is available to everyone. So to sum it all up: I think you are overcomplicating the matter.
Also, there are some people who just don't know how to configure servers. A txt file is easier.
So to save one additional HTTP request, you're adding the entirety of your humans.txt to every request the site makes? Seems a bit of a false dichotomy to me.
What if we started placing the location of our humans.txt in our robots.txt, in a similar way to how we can include the location of our sitemap.xml file?
Humans: http://example.com/humans.txt
Or, as someone else has mentioned, include a single additional header:
X-Humans: http://example.com/humans.txt
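For the robots.txt approach, a client could discover the file the same way crawlers discover sitemaps. A sketch in Python, assuming the hypothetical Humans: field shown above:

```python
import requests

def find_humans_txt(site: str) -> str | None:
    """Scan a site's robots.txt for a 'Humans:' line, mirroring how
    the established 'Sitemap:' field works. The 'Humans:' field is
    the proposal from this thread, not something crawlers understand
    today."""
    robots = requests.get(f"{site}/robots.txt")
    if robots.status_code != 200:
        return None
    for line in robots.text.splitlines():
        if line.lower().startswith("humans:"):
            return line.split(":", 1)[1].strip()
    return None

print(find_humans_txt("https://example.com"))
```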
@aaronbassett I can understand what you're saying, but this was never meant to replace humans.txt at all. It was meant to be a place for a few headers. That being said, I agree with you and @netlovers: the idea of a single header is a great one.
This idea originated in the headers so that it would be invisible to most people browsing the web. It would also allow the communication, or signing, to occur when dealing with the multitude of services that don't have browsers attached to them.
@danekszy I don't think this is overcomplicating things. There's a joy to discovering a signature from fellow developers. Wouldn't it be great to experience that joy, say, while examining headers from an API?
As developers we interact with services all the time that don't have an HTML front end.
Going forward with that assumption, the idea / extension would be changed to parse for a single header pointing to the location of a humans.txt, and then notify the person browsing that it exists and/or parse it for display.
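A sketch of that revised extension logic, assuming the hypothetical X-Humans header from earlier in the thread:

```python
import requests

def check_for_humans(url: str) -> None:
    # Look for a single header advertising the humans.txt location.
    # "X-Humans" is the hypothetical name proposed above.
    response = requests.get(url)
    humans_url = response.headers.get("X-Humans")
    if humans_url is None:
        return  # No header advertised; stay quiet, make no extra request.
    humans = requests.get(humans_url)
    if humans.ok:
        # A real extension would pop a notification; a print stands in here.
        print(f"humans.txt found at {humans_url}:\n{humans.text}")

check_for_humans("https://example.com")
```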
@randomdrake I don't think I enjoy some things as much as you do, but I see your point. :wink: Wouldn't requesting robots.txt suffice? If you're using an API, then try api.site.com/humans.txt. You say it's not a replacement for humans.txt, but I see it as exactly that, since you're trying to advertise it or make it a convention. But again, what brings more joy: a header that everyone uses (and has an extension for), or the spontaneous act of a custom header added by some dev? We really don't need another content-unrelated header...
@danekszy I can definitely understand the desire to not muck up the already bloated header space.
The issue is I don't want to "try api.site.com/humans.txt". I don't want to have to think about it. I want it to appear and I want to say: "Neat! Someone implemented that neat thing developers do for each other! I'm going to go check it out."
The experience right now, of occasionally remembering to check whether some file may or may not be there, isn't a good one, and isn't as seamless or available as it could be, in my opinion.
I would like to improve it.
@randomdrake Not to go into this matter any deeper... it just seems selfish to me to boost one's own ego through attribution while adding unnecessary bytes to data that I requested. Some people may want this, some may not. That's just my opinion.
Stupid HTTP spec all getting in the way of improving the experience of developing things for the Internet.
@danekszy I'm grateful for your time in expressing your opinion.