wikiteam
[GH-395] Port to Python 3
Fixes #395.
EDIT 2: I've removed the entire body of this comment because it's difficult to keep it up to date with progress elsewhere.
EDIT 3: I've personally unsubscribed from this thread because I was getting an email for every single commit.
To interact with this draft pull request, please consult the README on the forked repository. If you run into any problems, opening an issue there will be more effective than commenting about it here.
On 09/06/21 23:34, Elsie Hupp wrote:
I had to port both poster and wikitools to Python 3 in order to get this to work, so I included those in their own folders in the repository.
Thank you very much, but can you also post them upstream?
Probably yes? I figured it was easiest to do it here to begin with.
Currently what I’m stuck on is URL encoding, which can probably be simplified by porting to Requests and/or urllib3.
You probably didn't want to commit .vscode/settings.json
You probably didn't want to commit .vscode/settings.json
I think I just forgot to revert that when I flattened my commits (though the change stopped being relevant once I started using pipenv).
FWIW, I think it might be worth migrating from pipenv to poetry to facilitate distribution on, e.g., PyPI.
Do you have any other immediate feedback? IIRC the main issue I was running into was with the test suite, so I haven’t been able to fully validate the new code.
wikitools seems to be abandoned by its maintainer (though I haven’t tried particularly hard to reach him), so I went ahead and published the version from this pull request on PyPI as wikitools3, which allowed me to specify it as an external dependency. I did some digging, and it turned out that someone else had already made a Python 3 version of poster called poster3, so I used that as the dependency for wikitools3.
I migrated wikitools3 to use poetry, which seems to come with a lot of advantages, so I might want to migrate from pipenv to poetry here, as well.
Hi @GreenReaper—can you try the updated version?
In the cloned wikiteam directory, try:
$ git pull
$ poetry install
$ poetry run python dumpgenerator.py --xml --xmlrevisions https://furry.wiki.opencura.com
I ran the above commands myself several times, so the encoding issues should be fixed?
Thanks again for helping me find bugs!
That works, thanks! However I tried it with an older wiki, in an attempt to ensure that encoding was saving correctly, and it seems the --xml case (no --xmlrevisions) is still broken on xmlfile.write in generateXMLDump, both on this wiki and the opencura one.
For this wiki I needed to add |class="mediawiki to the search regex in getWikiEngine, because the wiki was otherwise detected as Unknown (since we removed the generator head lines as superfluous :-) ). I tried --force, but it didn't seem to do anything:
# poetry run python dumpgenerator.py --xml --curonly https://zh.wikifur.com/ --api https://zh.wikifur.com/w/api.php --index https://zh.wikifur.com/w/index.php
Checking API... https://zh.wikifur.com/w/api.php
API is OK: https://zh.wikifur.com/w/api.php
Checking index.php... https://zh.wikifur.com/w/index.php
index.php is OK
#########################################################################
# Welcome to DumpGenerator 0.4.0-alpha by WikiTeam (GPL v3) #
# More info at: https://github.com/WikiTeam/wikiteam #
#########################################################################
#########################################################################
# Copyright (C) 2011-2021 WikiTeam developers #
# This program is free software: you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation, either version 3 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program. If not, see <http://www.gnu.org/licenses/>. #
#########################################################################
Analysing https://zh.wikifur.com/w/api.php
Trying generating a new dump into a new directory...
Loading page titles from namespaces = all
Excluding titles from namespaces = None
20 namespaces found
Retrieving titles in the namespace 0
541 titles retrieved in the namespace 0
Retrieving titles in the namespace 1
30 titles retrieved in the namespace 1
Retrieving titles in the namespace 2
61 titles retrieved in the namespace 2
Retrieving titles in the namespace 3
93 titles retrieved in the namespace 3
Retrieving titles in the namespace 4
183 titles retrieved in the namespace 4
Retrieving titles in the namespace 5
2 titles retrieved in the namespace 5
Retrieving titles in the namespace 6
422 titles retrieved in the namespace 6
Retrieving titles in the namespace 7
1 titles retrieved in the namespace 7
Retrieving titles in the namespace 8
36 titles retrieved in the namespace 8
Retrieving titles in the namespace 9
1 titles retrieved in the namespace 9
Retrieving titles in the namespace 10
140 titles retrieved in the namespace 10
Retrieving titles in the namespace 11
4 titles retrieved in the namespace 11
Retrieving titles in the namespace 12
49 titles retrieved in the namespace 12
Retrieving titles in the namespace 13
2 titles retrieved in the namespace 13
Retrieving titles in the namespace 14
241 titles retrieved in the namespace 14
Retrieving titles in the namespace 15
2 titles retrieved in the namespace 15
Retrieving titles in the namespace 828
0 titles retrieved in the namespace 828
Retrieving titles in the namespace 829
0 titles retrieved in the namespace 829
Retrieving titles in the namespace 100
4 titles retrieved in the namespace 100
Retrieving titles in the namespace 101
1 titles retrieved in the namespace 101
Titles saved at... zhwikifurcom_w-20210928-titles.txt
1813 page titles loaded
https://zh.wikifur.com/w/api.php
Retrieving the XML for every page from "start"
Traceback (most recent call last):
File "dumpgenerator.py", line 2850, in <module>
main()
File "dumpgenerator.py", line 2841, in main
createNewDump(config=config, other=other)
File "dumpgenerator.py", line 2361, in createNewDump
generateXMLDump(config=config, titles=titles, session=other["session"])
File "dumpgenerator.py", line 852, in generateXMLDump
xmlfile.write(bytes(header, 'utf-8'))
TypeError: write() argument must be str, not bytes
Incidentally, it says it saved in "a new directory" but it doesn't say which directory, which can be confusing.
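The traceback above is the classic Python 2-to-3 text/bytes mismatch: the dump file is opened in text mode, so write() expects str, and wrapping the header in bytes(..., 'utf-8') triggers the TypeError. A minimal sketch of the likely fix (the header string and file path here are invented for the example):

```python
import os
import tempfile

# Hypothetical XML header; the real one is built elsewhere in dumpgenerator.
header = '<mediawiki xml:lang="zh">\n'

path = tempfile.mktemp(suffix=".xml")
with open(path, "w", encoding="utf-8") as xmlfile:
    # Broken Python 2 idiom: xmlfile.write(bytes(header, "utf-8"))
    xmlfile.write(header)  # in Python 3, write the str directly

with open(path, encoding="utf-8") as xmlfile:
    assert xmlfile.read() == header
os.remove(path)
```

(The alternative is to open the file in binary mode, "wb", and keep writing bytes; either way, the mode and the argument type have to agree.)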
@GreenReaper Okay!
I’m not 100% sure what you’re describing with |class="mediawiki, so I didn’t do anything with that.
I fixed two more encoding bugs. I also changed the default path to be a subdirectory of the parent directory rather than the working directory (so that the default path isn’t inside the cloned repository) and added a console message that prints when the --path argument is not used:
No --path argument provided. Defaulting to:
../[domain_prefix]-[date]-wikidump
Which expands to:
../zhwikifurcom_w-20210928-wikidump
(I could probably make the argument parsing more verbose across the board.)
Anyway, in the cloned wikiteam directory, try:
$ git pull
$ poetry install
$ poetry run python wikiteam3/dumpgenerator.py [args]
Note that dumpgenerator.py is now wikiteam3/dumpgenerator.py (a change that has to do with packaging and isn’t quite relevant here, yet).
I tried the following, and while I didn’t let it run its entire course, I didn’t get any errors for the first minute or two it was running:
$ poetry run python wikiteam3/dumpgenerator.py --xml --curonly https://zh.wikifur.com/ --api https://zh.wikifur.com/w/api.php --index https://zh.wikifur.com/w/index.php
Yeah, I could have been clearer there. I meant getWikiEngine's detection, without which it refused to proceed; I changed to:
elif re.search(
    '(?im)(alt="Powered by MediaWiki"|<meta name="generator" content="MediaWiki|class="mediawiki)',
    result,
):
    wikiengine = "MediaWiki"
There was a similar regex in checkIndex:
if re.search(
    '(This wiki is powered by|<h2 id="mw-version-license">|meta name="generator" content="MediaWiki|class="mediawiki)',
    raw,
):
    return True
I tried the commands above and it worked for a while, then broke (trying to save the constant footer string?):
Downloaded 1810 pages
新聞:Krystal的三明治在Fur Affinity爆红, 1 edit
新聞:中文WikiFur的前綴名全面中文化, 1 edit
新聞:英语 WikiFur 迁入 wikifur.com, 1 edit
新聞討論:羽鲨, 1 edit
Traceback (most recent call last):
File "wikiteam3/dumpgenerator.py", line 2854, in <module>
main()
File "wikiteam3/dumpgenerator.py", line 2845, in main
createNewDump(config=config, other=other)
File "wikiteam3/dumpgenerator.py", line 2365, in createNewDump
generateXMLDump(config=config, titles=titles, session=other["session"])
File "wikiteam3/dumpgenerator.py", line 883, in generateXMLDump
xmlfile.write(bytes(footer, 'utf-8'))
TypeError: write() argument must be str, not bytes
Unfortunately, the script reported that the config file was not written, so it had to start again. In fact, it looks like the file was written, but if it's meant to be a text file, it's unreadable, so maybe that bit needs to be changed?
As for the footer, I tried changing the existing line close to the end of generateXMLDump that mentions the footer to
xmlfile.write(str(footer))
and will see how that goes... though on consideration, it really should already be a str, so perhaps that is unnecessary? Anyway, adjusting that line resulted in a completed XML file, so it's definitely the issue.
I’m actually running the test again myself, though I added --delay 1 to avoid a timeout.
You can pull the latest changes again if you want.
Regarding the config file, I ran into the same issue myself, so that’s another thing I need to fix, lol.
Also, by the way, it can be helpful if you refer to line numbers, e.g. for the blocks where you added |class="mediawiki. (I was able to find them on my own, but line numbers make it easier.)
--delay 1 slows things down pretty dramatically, so you might want to try a smaller fraction of a second.
Aaaaand the delay printout doesn’t display fractional seconds, so I fixed that.
In this case I'm actually running the script on the same server as the wiki (it's not one we need it for; was just running it as a way to test the ability to handle multi-byte encodings), so I guess that timeout isn't so much of an issue.
I did notice that it didn't like me pausing it in the middle to go look at dumpgenerator.py, got a ten-second request timeout.
Ah, okay. FWIW I’m trying with a 0.1 second delay.
Well, it succeeded on my end. You?
Seems to be working now, aside from the config file issue. However, when I try it with --images I get AttributeError: module 'urllib' has no attribute 'unquote' at line 1557 in getImageNamesAPI, when retrieving the image filenames. From what I can see, it probably needs to be replaced with urllib.parse.unquote, but check that the semantics work (e.g. pre-3.9 it needs a str rather than bytes, which could catch you out if you are testing on 3.9, where it works with bytes, but others are not).
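For reference, a minimal sketch of the suggested replacement. The filename below is invented for the example: it is the percent-encoded UTF-8 form of the title 羽鲨 seen in the dump log earlier in this thread:

```python
from urllib.parse import unquote

# Percent-encoded filename of the kind returned by the MediaWiki API.
encoded = "%E7%BE%BD%E9%B2%A8.jpg"

# unquote() accepts a str on every Python 3 version; bytes input is only
# supported from 3.9 onward, which is the portability caveat noted above.
decoded = unquote(encoded)
assert decoded == "羽鲨.jpg"
```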
You can try this again if you want to:
$ git pull
$ poetry install
$ poetry run python wikiteam3/dumpgenerator.py --images --xml --curonly https://zh.wikifur.com/ --api https://zh.wikifur.com/w/api.php --index https://zh.wikifur.com/w/index.php
The image downloads should be working now, but there’s still one more test that’s failing on my end.
Actually, not quite yet…
The command I posted completed successfully for me—well, with --delay 0.25 to keep me from getting timed out—so hopefully it will work for you, too, now.
I haven’t really looked into why resuming is broken, yet, though.
Resume is still broken, but now you can do:
$ git pull
$ poetry install
$ dumpgenerator [args]
Note that dumpgenerator now defaults to a subdirectory of the working directory, rather than a subdirectory of the parent directory. This is because you can now easily run it from anywhere (after installing it), but it also means that if you run it in the cloned repository, Git will see the output files.
Correction, now you should be able to do:
$ git pull
$ pip install --force-reinstall dist/*.whl
Then, from anywhere, you should be able to run:
$ dumpgenerator [args]
To remove the installed package from your system run:
$ pip uninstall wikiteam3
Thanks for the continued work on the port. The diff has become unreadable with the directory rename to "wikiteam3", but I'm glad to see it no longer includes a wikitools fork. (Ideally we'd do without such a library altogether; it was introduced mostly to avoid reimplementing API continuation and such things for which there are many client libraries already.)
I'm not quite sure I understand a few of the changes, for instance the introduction of "poetry" and the apparent removal of certain historical files. It's hopefully easy enough to squash some of those commits so they become readable again.
Have you already asked more people to test this? @Coloradohusky, seeing you have archived quite a few wikis, would you be willing to try this branch?
The diff has become unreadable with the directory rename to “wikiteam3”
I’m taking a somewhat more drastic approach to the rewrite, because it makes it easier for me to work on it.
I'm glad to see it no longer includes a wikitools fork. (Ideally we'd do without such a library altogether; it was introduced mostly to avoid reimplementing API continuation and such things for which there are many client libraries already.)
I went ahead and published the wikitools fork as wikitools3 on PyPI, so it’s being pulled in via Poetry/pip.
I'm not quite sure I understand a few of the changes, for instance the introduction of "poetry" and the apparent removal of certain historical files. It's hopefully easy enough to squash some of those commits so they become readable again.
I don’t think I’ve removed any of the core code, though only dumpgenerator remotely works at this point. Poetry is a tool for packaging and deployment, i.e. on PyPI.
Both wikitools3 and wikiteam3 would be easier to manage if they were part of the WikiTeam GitHub organization rather than being pull requests on their Python 2 equivalents.
For further modularity, it might work well to break up wikitools3 into individual parts, i.e. wikitools3-dumpgenerator, etc., but this doesn’t have to happen immediately.
Have you already asked more people to test this?
I’ve only really solicited feedback from people who have wandered into this thread on their own. In order to do certain types of testing, it would be best for me to have a set of test servers, one for each wiki engine, so that there would be consistent test targets.
One of the reasons I say it would work better to publish wikiteam3 separately from wikiteam is that I already have a pull request against my pull request, and more of the same would not surprise me. Basically publishing separately would allow wikiteam3 to have its own issues and pull requests, among other things, without having to commingle them with wikiteam for Python 2. And in general people already seem to be using wikiteam3, so publishing a WIP would allow me to continue taking my time to flesh it out.
I'm happy to test things, but following this thread has led me into a morass. My best shot is
git clone https://github.com/elsiehupp/wikiteam3.git
cd wikiteam3
There is no dumpgenerator.py any more, just a wikiteam3/dumpgenerator directory, so I tried:
poetry run python wikiteam3/dumpgenerator/generator.py --images --xml --curonly https://zh.wikifur.com/ --api https://zh.wikifur.com/w/api.php --index https://zh.wikifur.com/w/index.php
which says to please install Poetry, when that message is actually a catch-all for any failed import. On Mint it cannot find http.cookiejar, and pip install http.cookiejar leads me into a KDE Wallet dialog, which sulks because I am on MATE and don't have all the KDE keyrings. This is all getting confusing for something that should just be a command-line thing.
It also doesn't like
from urllib.parse import urlparse, urlunparse
Traceback (most recent call last):
File "
I'm happy to test things, but following this thread has led me into a morass.
I actually added a set of comprehensive installation instructions way at the top of this thread (i.e. in the pull description). My intention with those instructions is to keep them up to date as long as this pull request remains open.
Can you try the instructions up top and then let me know if they work for you?
(FWIW the instructions don’t actually install the .py files; they install the encapsulated .whl files instead. This way you don’t have to manually install any of the dependencies. If you want to make any changes to the .py files, though, you will need to install Poetry in order to build your changes so you can install them. How to use Poetry is kind of a separate topic, but I can try and answer any questions you do have.)
The instructions at the top work. I think I'd better stick with that, as I don't have the python skills to know whether faults in the development branch are mine or yours. Good luck with the port.
The instructions at the top work.
Good to hear!
I don't have the python skills to know whether faults in the development branch are mine or yours.
If you get any errors, feel free to post them in this thread regardless. And if they’re a bit long, you can post them as a GitHub Gist and then share the link here. Too much information can still be more helpful than none at all, though obviously make sure you don’t accidentally share something you’d like to keep private.
Will this PR work for resuming existing dumps? I'm currently running into #269 on the Python 2 version.
Resume is still broken
I ran into the same issue, and I think I figured out the root cause:
The custom reverse_readline code taken from https://stackoverflow.com/a/23646049 doesn't work for multibyte encoded text files (as already mentioned in the comments there). So as long as there're no special characters in the titles, everything works fine, but with Chinese (or German) titles, it'll fail.
There's a module called "file_read_backwards", which is designed to work with UTF-8: https://github.com/RobinNil/file_read_backwards
I'll try to use this instead and will open a new PR if I get it to work.
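A stdlib-only sketch of the failure mode described above: slicing the underlying bytes at an arbitrary offset, which is what a fixed-size chunked reverse reader does, can land in the middle of a multibyte UTF-8 sequence. The title below is taken from the dump log earlier in this thread:

```python
# "新聞討論:羽鲨" is one of the Chinese page titles from the dump log above.
data = "新聞討論:羽鲨\n".encode("utf-8")

# A chunk boundary that happens to fall on a character edge decodes fine:
assert data[-4:].decode("utf-8") == "鲨\n"

# A boundary one byte over lands mid-character (continuation bytes only):
try:
    data[-3:].decode("utf-8")
    split_ok = True
except UnicodeDecodeError:
    split_ok = False
assert not split_ok  # this is why naive byte-wise reverse reading breaks
```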
I suggest adding the dist folder to the gitignore
I suggest adding the dist folder to the gitignore
Eventually, yes, but in the meantime it's useful for people to be able to install the WIP builds.
FYI: I just downloaded and installed the latest version as described on the top of this PR, but it seems like some (well, at least one) module(s) are missing in the package: I can see util.py in the python3 branch, but it is not included in the installation, leading to a ModuleNotFoundError when executing dumpgenerator.
[edit] It's not just util.py missing, there're also a lot of missing dots in the import statements, which are all present in the python3 branch. After copying the complete dumpgenerator folder from git to the respective folder of the installed package, I was able to successfully execute dumpgenerator (without parameters so far - this will be the next step).[/edit]
On the commit that mostly works, when I specify --retries:
/m/t/2/a3 ❯❯❯ dumpgenerator --xml --images --retries 4 --api https://a-three.fandom.com/api.php
Traceback (most recent call last):
File "/home/thetechrobo/.local/bin/dumpgenerator", line 8, in <module>
sys.exit(main())
File "/home/thetechrobo/.local/lib/python3.9/site-packages/wikiteam3/dumpgenerator/__init__.py", line 27, in main
DumpGenerator()
File "/home/thetechrobo/.local/lib/python3.9/site-packages/wikiteam3/dumpgenerator/generator.py", line 54, in __init__
config, other = DumpGenerator.getParameters(params=params)
File "/home/thetechrobo/.local/lib/python3.9/site-packages/wikiteam3/dumpgenerator/generator.py", line 2058, in getParameters
check, checkedapi = DumpGenerator.checkRetryAPI(api, args.retries, args.xmlrevisions, session)
File "/home/thetechrobo/.local/lib/python3.9/site-packages/wikiteam3/dumpgenerator/generator.py", line 2188, in checkRetryAPI
while retry < retries:
TypeError: '<' not supported between instances of 'int' and 'str'
Removing the --retries arg works, though.
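A plausible minimal reproduction, assuming the option is parsed with argparse and no type= was given; the parser below is a sketch for illustration, not the actual wikiteam3 code:

```python
import argparse

parser = argparse.ArgumentParser()
# Without type=int, args.retries would be the str "4", and a comparison
# like `retry < retries` against an int counter raises exactly the
# TypeError shown in the traceback above. Coercing to int fixes it:
parser.add_argument("--retries", type=int, default=5)

args = parser.parse_args(["--retries", "4"])
assert isinstance(args.retries, int) and args.retries == 4
```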
Good catch! You're not using the latest commit in https://github.com/elsiehupp/wikiteam3/tree/python3, though, where generator.py has only 249 lines (so an error in line 2188 is not possible). But the error is still there even in the latest commit. I just fixed that in https://github.com/t-karcher/wikiteam3/tree/python3 and updated my open pull request. Thanks for testing and letting us know!
Yeah, I know it's not the latest commit. It was the "commit that mostly works"
@TheTechRobo yeah I forgot to update the instructions. Sorry. 😬
I have no idea if this is a general dumpgenerator bug or if this is a bug with the port.
Command dumpgenerator --xml --images --api https://backrooms.fandom.com/api.php fails when downloading images with:
Downloaded 50 images
......................................................................................................................................................................Traceback (most recent call last):
File "/home/thetechrobo/.local/bin/dumpgenerator", line 8, in <module>
sys.exit(main())
File "/home/thetechrobo/.local/lib/python3.9/site-packages/wikiteam3/dumpgenerator/__init__.py", line 27, in main
DumpGenerator()
File "/home/thetechrobo/.local/lib/python3.9/site-packages/wikiteam3/dumpgenerator/generator.py", line 97, in __init__
DumpGenerator.resumePreviousDump(config=config, other=other)
File "/home/thetechrobo/.local/lib/python3.9/site-packages/wikiteam3/dumpgenerator/generator.py", line 2538, in resumePreviousDump
DumpGenerator.generateImageDump(
File "/home/thetechrobo/.local/lib/python3.9/site-packages/wikiteam3/dumpgenerator/generator.py", line 1698, in generateImageDump
filename2 = DumpGenerator.truncateFilename(other=other, filename=filename2)
File "/home/thetechrobo/.local/lib/python3.9/site-packages/wikiteam3/dumpgenerator/generator.py", line 114, in truncateFilename
+ md5(str(filename)).hexdigest()
TypeError: Unicode-objects must be encoded before hashing
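A minimal sketch of the likely fix (the filename below is hypothetical): in Python 3, hashlib digests operate on bytes, so the str must be encoded before hashing, whereas wrapping it in str() leaves it unhashable by md5:

```python
from hashlib import md5

filename = "羽鲨.jpg"  # hypothetical image filename with multibyte characters

# Broken Python 2 idiom: md5(str(filename)).hexdigest()
digest = md5(filename.encode("utf-8")).hexdigest()
assert len(digest) == 32  # hex digest of a 128-bit MD5 hash
```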
Works for me:
Creating "./backroomsfandomcom-20220428-wikidump/images" directory
Downloaded 10 images
Downloaded 20 images
Downloaded 30 images
Downloaded 40 images
Downloaded 50 images
Filename is too long, truncating. Now it is: 01000011 01110101 01110010 01101001 01101111 01110011 01101001 01110100 01111001 00100000 01101011 0a31963245c169670d5b220f1c001e770.png
Filename is too long, truncating. Now it is: 01001001 00100000 01001000 01000001 01000100 00100000 01010100 01001111 00101110 00101110 00101110 02631876bb935439f45e8812fc0d36a5e.jpg
Downloaded 60 images
Filename is too long, truncating. Now it is: 01001001 00100000 01001110 01000101 01000101 01000100 00100000 01010011 01010000 01010000 01000101 0bcb6160fe36d14ba90e86fb694daee68.jpg
Filename is too long, truncating. Now it is: 01011001 01001111 01010101 00100000 01000011 01000001 01001110 00100111 01010100 00100000 01010010 0e1e50050e038a73bf5bb2060f63649dc.gif
...
But I see you're still not using the latest version. Can you please try to follow the (updated!) instructions above and try again?
Oh, I must have forgotten to update. Sorry :/
I'll retest tonight
Thanks for helping me fix it.
I feel like an idiot. 😅
(FWIW, I didn't use the poetry install. The python3 branch works fine.)
xml_dump.py, at line 54: the try/except tries to perform a write operation on a file which is no longer open. The file is opened in a with statement, and as we know, in Python 3 a with statement closes the file as soon as control leaves the block.
This makes me think that an amateur worked on this port.
If this block of code is supposed to append items one after another, this is the corrected block (replacing lines 44 to 54):
try:
    r_timestamp = "<timestamp>([^<]+)</timestamp>"
    with open(xml_file_path, "a") as xml_file:
        for xml in get_xml_revisions(config, start=start):
            # Due to how generators work, it's expected this may be less
            # TODO: get the page title and reuse the usual format "X title, y edits"
            print(
                " %d more revisions exported"
                % len(re.findall(r_timestamp, xml))
            )
            xml = clean_xml(xml=xml)
            xml_file.write(str(xml))
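The closed-file behavior described above can be demonstrated in isolation with the standard library:

```python
import os
import tempfile

path = tempfile.mktemp()
with open(path, "w") as f:
    f.write("inside the with block\n")  # file is open here

# Once the with block exits, the file object is closed, so any further
# write raises ValueError ("I/O operation on closed file"):
try:
    f.write("outside the with block\n")
    wrote_after_close = True
except ValueError:
    wrote_after_close = False

assert not wrote_after_close
os.remove(path)
```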
Odd, in the python3 branch I had to fix some stuff.
@cooperdk:
This makes me think that an amateur worked on this port.
Busted!
In all seriousness, though, please do open a pull request over on the other repository with any issues you've found and fixed. There's a reason why I only described this as "mostly works". The reason I ended up maintaining the port is that other people kept replying to this thread. I don't really regularly work in Python.
I only really got the port into minimal working condition—if even that, apparently!—and I've honestly barely scratched the surface of the original code. I'm not 100% sure, but I think the issue you're describing originated when I updated some code because mypy or something was complaining about the original code not closing the file in the first place, so thank you for bringing it to my attention!
Hi @cooperdk—I went and looked at the code you're referencing, and it looks like you changed a lot more than just the lines in question. Since the branch python3 is apparently broken, could you please put your changes in a fork or new branch so I can look at them and use them to fix the bug?
Looking through the code it turns out I was misremembering, and the changes I made for mypy were in a different branch, prepare-for-publication, that I hadn't mentioned in this thread because I set the project aside again before I got it fully working. By contrast, the python3 branch has far fewer drastic changes, and I've been maintaining it largely for the purpose of this comment thread, since people keep showing up and asking for help. If you want, you can take a peek at the prepare-for-publication branch instead, but, again, it's very much not done.
OK, you know what, I'll get the branch "prepare-for-publication" imported and check it out with PyCharm. I have a license, maybe it has ideas for improvement as well.
Oh @elsiehupp I noticed your pypi package which I am going to focus on instead.
Did you make it CLI capable? If not, I'd be happy to contribute with that and a compiled version for ease of use.
Oh @elsiehupp I noticed your pypi package which I am going to focus on instead.
Did you make it CLI capable? If not, I'd be happy to contribute with that and a compiled version for ease of use.
Hi @cooperdk
The PyPI package you're referencing, wikitools3, is a dependency of wikiteam3. My purpose with the prepare-for-publication branch of wikiteam3 was to prepare it for publication on PyPI. wikitools has AFAIK never had a command-line interface, and one can think of the dumpgenerator module in wikiteam as a limited-scope implementation thereof.
This comment thread is a somewhat cumbersome way to communicate, so if you want you can email me at github [at] elsiehupp [dot] com or message me on Matrix. Either way I'd be happy to add you as an authorized user on either or both the wikiteam3 or wikitools3 repositories here on GitHub.