monitoraddedv2 does not return the full name

Open mrwsl opened this issue 2 years ago • 2 comments

Hyprland Version

System/Version info
Hyprland, built from branch main at commit d6f1b151b2fe85ffbb131cbdd05acefc6a357e81 dirty (animations: fix m_Goal not being set after 4911 (4992)).
Date: Wed Mar 6 11:14:13 2024
Tag: v0.36.0-60-gd6f1b151

flags: (if any)


System Information:
System name: Linux
Node name: framework
Release: 6.7.8-arch1-1
Version: #1 SMP PREEMPT_DYNAMIC Sun, 03 Mar 2024 00:30:36 +0000


GPU information: 
c1:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Phoenix1 [1002:15bf] (rev c4) (prog-if 00 [VGA controller])


os-release: NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://archlinux.org/"
DOCUMENTATION_URL="https://wiki.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://gitlab.archlinux.org/groups/archlinux/-/issues"
PRIVACY_POLICY_URL="https://terms.archlinux.org/docs/privacy-policy/"
LOGO=archlinux-logo


plugins:

Bug or Regression?

Bug

Description

I already asked this in the PR, but I want to document it as an issue as well. Please correct me if I'm wrong.

If I understand this PR correctly, a monitoraddedv2 event was added that contains more information about the plugged-in monitor. With monitoraddedv2 I only get the shortened make. I don't know if that's intended. According to

szShortDescription =
       removeBeginEndSpacesTabs(std::format("{} {} {}", output->make ? output->make : "", output->model ? output->model : "", output->serial ? output->serial : ""));

it should have more information than just LG, no?

Here is the relevant hyprctl output:

,{
    "id": 1,
    "name": "DP-2",
    "description": "LG Electronics LG ULTRAFINE 308MASX9J302",
    "make": "LG Electronics",
    "model": "LG ULTRAFINE",
    "serial": "308MASX9J302",
    "width": 3840,
    "height": 2160,
    "refreshRate": 60.00000,
    "x": 1928,
    "y": 0,
    "activeWorkspace": {
        "id": 3,
        "name": "3"
    },
    "specialWorkspace": {
        "id": 0,
        "name": ""
    },
    "reserved": [0, 31, 0, 0],
    "scale": 1.67,
    "transform": 0,
    "focused": true,
    "dpmsStatus": true,
    "vrr": false,
    "activelyTearing": false
}]

How to reproduce

Observe the socket and plug in a monitor.
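
For context, socket2 events are single text lines of the form EVENT>>DATA. A small parser sketch (the exact id,name,description payload layout is an assumption based on the PR discussion, not verified against the Hyprland source) shows what a client observing the socket would see:

```javascript
// Hypothetical parser for a Hyprland socket2 event line.
// Assumed v2 payload layout: "monitoraddedv2>>ID,NAME,DESCRIPTION"
// (socket2 usually lives at $XDG_RUNTIME_DIR/hypr/$HYPRLAND_INSTANCE_SIGNATURE/.socket2.sock)
function parseMonitorAddedV2(line) {
  const [event, payload] = line.split('>>');
  if (event !== 'monitoraddedv2' || payload === undefined) return null;
  // Only the first two commas delimit fields; the description itself is free text.
  const [id, name, ...rest] = payload.split(',');
  return { id, name, description: rest.join(',') };
}

// With the full description the event would carry make, model and serial;
// the issue reports that only the shortened make ("LG") arrives.
console.log(parseMonitorAddedV2(
  'monitoraddedv2>>1,DP-2,LG Electronics LG ULTRAFINE 308MASX9J302'
));
```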

Crash reports, logs, images, videos

No response

mrwsl avatar Mar 07 '24 07:03 mrwsl

I'm interested in this too. I believe Anoop mentioned in this other thread that it's not yet allowed for security reasons, but that the goal would be to allow users to explicitly enable access.

My use case is that in my pre-request script, I want to read a token that is stored/updated in a specific file on my filesystem to make http calls to a Vault instance, to then retrieve secrets for use in fetching OAuth tokens that I can use in my requests.

If there are any updates on this, please let us know. If you're comfortable opening this up to a contribution, I'd be happy to investigate and make an attempt at a PR.

Edit: I tried to implement a workaround locally, but ran into issues using the node-vault package, which apparently depends internally on Node's tty package (https://nodejs.org/api/tty.html). I'm tempted to ask whether we can enable it the way Anoop suggested we could enable stream, path, url, and util in the issue I linked above, but I'm afraid we might end up in an endless game of whack-a-mole, constantly whitelisting packages. Instead, I might suggest that we allow users to specifically enable/disable certain built-in packages in their VM, or even offer an option to run the VM with a host context instead of a sandbox, perhaps with an appropriate warning about the security implications?

mtHuberty avatar Oct 03 '23 16:10 mtHuberty
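
The use case above boils down to reading a token from disk and turning it into a request header. A minimal, runnable sketch (the helper name tokenHeader is made up; the Bruno-specific calls appear only as comments, since they need the app's runtime):

```javascript
// Hypothetical helper: turn a token read from disk into an Authorization header.
// The file read and the Bruno req API are shown in comments only, so this
// sketch stays runnable outside Bruno.
function tokenHeader(rawToken) {
  // Token files often end with a trailing newline; strip surrounding whitespace.
  return { Authorization: `Bearer ${rawToken.trim()}` };
}

// Inside a Bruno pre-request script (once filesystem access is enabled):
//   const fs = require('fs');
//   const raw = fs.readFileSync('/path/to/vault-token', 'utf8');
//   req.setHeader('Authorization', tokenHeader(raw).Authorization);

console.log(tokenHeader('s.EXAMPLE123\n'));
```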

Could it be secure to access files inside collections? I mean, having a module that only has access to files stored in the same path as the .bru files.

pove avatar Oct 04 '23 15:10 pove

There are two kinds of access that we need to consider when executing a script inside vm2:

  1. Network Access
  2. Filesystem Access

We decided to allow network access to scripts by default, since it is something people who use Bruno commonly need. In the future, we will provide a toggle to disable network access in bruno.json.

We decided to disable filesystem access by default. The user can, however, enable it by adding the following to bruno.json:

{
  "filesystemAccess": {
    "allow": true
  }
}

Could it be secure to access files inside collections? I mean, having a module that only has access to files stored in the same path as the .bru files.

Yes, we will have to build a wrapper around fs before passing to vm2

helloanoop avatar Oct 04 '23 21:10 helloanoop

Just got this completed. Cutting a new release now.

helloanoop avatar Oct 04 '23 21:10 helloanoop

Available in GUI v0.19.0. Not yet available in the CLI.

Go give it a whirl and let me know if your case is solved.


helloanoop avatar Oct 04 '23 23:10 helloanoop

@helloanoop Amazing. My case is solved, as I can now do 2 things I have been missing:

  1. read an AuthToken from a file and use it.
  2. generate attachments by reading binary files directly into base64, setting the request body and the appropriate Content-Disposition headers.

Now we need this supported in the CLI :)

Thanks!

Rzpeg avatar Oct 05 '23 07:10 Rzpeg

@Rzpeg Can you share your script? You can replace the URLs and other details for privacy. Your script would be super helpful for community members who want to do file uploads using Bruno.

helloanoop avatar Oct 05 '23 08:10 helloanoop

@helloanoop Sure, here it goes:

const fs = require('fs');
const path = require('path');

const attachmentFilename = "debug-data.bin";
const attachmentPath = path.join(bru.cwd(), attachmentFilename);
const attachment = fs.readFileSync(attachmentPath, "base64");
const attachmentLength = attachment.length;

req.setHeader("Content-Type", "application/octet-stream");
req.setHeader("Content-Disposition", `attachment; filename="${attachmentFilename}"`);
req.setHeader("Content-Length", attachmentLength);

req.setBody(attachment);

Rzpeg avatar Oct 05 '23 08:10 Rzpeg

@helloanoop Does the CLI already support running scenarios that use the fs module?

Edit: Just did a test run on CLI 0.12.0; it's not supported yet.

Rzpeg avatar Oct 06 '23 05:10 Rzpeg

Great job @Rzpeg. I have tried this, changing "application/octet-stream" to "multipart/form-data", but it does not work.

eferro70 avatar Oct 06 '23 11:10 eferro70

Great job @Rzpeg. I have tried this, changing "application/octet-stream" to "multipart/form-data", but it does not work.

This is because the multipart/form-data content type has a different body structure/specification.

You need to define boundaries and construct the request in the proper manner: https://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.2

Rzpeg avatar Oct 06 '23 11:10 Rzpeg
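
To illustrate the point about boundaries: a multipart/form-data body wraps every part between boundary markers and ends with a closing marker. A minimal sketch (the helper name and the fixed boundary are simplified assumptions; real uploads should generate a random boundary and use binary-safe buffers):

```javascript
// Hypothetical sketch: build a multipart/form-data body for a single file part.
function buildMultipartBody(fieldName, filename, contentBase64, boundary) {
  return [
    `--${boundary}`,                       // opening boundary for the part
    `Content-Disposition: form-data; name="${fieldName}"; filename="${filename}"`,
    'Content-Type: application/octet-stream',
    'Content-Transfer-Encoding: base64',
    '',                                    // blank line separates headers from content
    contentBase64,
    `--${boundary}--`,                     // closing boundary ends the body
    '',
  ].join('\r\n');
}

// The request itself would then be sent with the header:
//   Content-Type: multipart/form-data; boundary=<boundary>
```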

@eferro70 Once you figure out how to solve your use case, it would be great if you could share it in Scriptmania - https://github.com/usebruno/bruno/discussions/385

helloanoop avatar Oct 06 '23 13:10 helloanoop

Hello, I got here via this issue. Please tell me if I should rather open a new ticket, but I found this thread quite relevant.

My problem: I am attempting to access secrets in Azure Key Vault using @azure/keyvault-secrets and @azure/identity in a pre-script, but I am facing module issues: Error invoking remote method 'send-http-request': VMError: Cannot find module 'os'. I managed to sneak past this one by installing this exported version, but then only to face the next one in line: Error invoking remote method 'send-http-request': VMError: Cannot find module 'crypto'.

What is the current plan on resolving issues like this? Maybe a list of whitelisted packages from node could be included in bruno.json?

jonasgheer avatar Oct 06 '23 19:10 jonasgheer

What is the current plan on resolving issues like this? Maybe a list of whitelisted packages from node could be included in bruno.json?

Yes, I agree with the whitelisting approach.. moduleWhitelist

Also curious to know which tool were you using before Bruno :) And how were you managing secrets before.

helloanoop avatar Oct 06 '23 20:10 helloanoop

@helloanoop whitelisting is a neat idea. Also, while you are here, do you happen to have an ETA for fs support in the CLI?

Rzpeg avatar Oct 06 '23 20:10 Rzpeg

2 hrs :)

helloanoop avatar Oct 06 '23 20:10 helloanoop

That's... At least 2 days quicker than anticipated. :) Take a weekend and... REST :P

Rzpeg avatar Oct 06 '23 20:10 Rzpeg

What is the current plan on resolving issues like this? Maybe a list of whitelisted packages from node could be included in bruno.json?

Yes, I agree with the whitelisting approach.. moduleWhitelist

Also curious to know which tool were you using before Bruno :) And how were you managing secrets before.

Awesome! In all honesty my current workflow is the good old copy/paste, which I'm trying to get away from. I am currently using Insomnia a bit on and off. There is an azure key vault plugin that I didn't get to work last time I tried (my fault no doubt). I stumbled across Bruno today and I got curious if I could get it working 😄

jonasgheer avatar Oct 06 '23 20:10 jonasgheer

@jonasgheer @Rzpeg I have published Bru CLI v0.13.0 and Bru GUI v0.21.0

These have support for whitelisting and filesystem access. There is a breaking change in the config:

{
  "version": "1",
  "name": "bruno-testbench",
  "type": "collection",
  "scripts": {
    "moduleWhitelist": [
      "crypto"
    ],
    "filesystemAccess": {
      "allow": true
    }
  }
}


Let me know if you face any further issues. Also @jonasgheer Once you get Azure Key Vault working, please share your script for the community reference in Scriptmania - https://github.com/usebruno/bruno/discussions/385

Let's keep this ticket open. Need to update docs.

helloanoop avatar Oct 06 '23 23:10 helloanoop

Oh wow, that was quick! Unfortunately I'm still facing some issues. After whitelisting "os" and "crypto" I ran into this error: Error: Error invoking remote method 'send-http-request': VMError: Operation not allowed on contextified object.. I had a go at grabbing the VM code in an attempt to narrow down the problem (on the way I even bumped into an issue you filed on the vm2 repo, so thanks for that one 😄). Unfortunately the only thing I managed to conclude is that the script crashes somewhere in the require call: const { SecretClient } = require("@azure/keyvault-secrets");. From what I have gathered I'm guessing the code in there tries to manipulate something it's not allowed to.

Granted, the module requires that you have the Azure CLI installed and that you are logged in. So I'm not even sure if it's really possible to have it run in a somewhat "safe" manner. I'm not sure if these sorts of modules more or less require full-blown host access or not 🤔 I'm flailing a bit in the dark here; my limited knowledge of both vm2 and @azure/keyvault-secrets has me a bit stumped.

@helloanoop

jonasgheer avatar Oct 07 '23 13:10 jonasgheer

I might be making some progress here, hold your horses 😄

jonasgheer avatar Oct 07 '23 16:10 jonasgheer

I have identified some problems. I'll start with the biggest one, which sadly might make this whole thing a no-go.

@azure/keyvault-secrets pulls in 3 libraries amongst a bunch of others:

These packages all have a line of code in them calling util.inherits:

I found a related issue in the vm2 repo (it links further to other related issues). As far as I have gathered, the conclusion is that the internal node modules are not "real" but proxied (I'm on thin ice here), and as a result you are not allowed to extend other objects using them. This is, as far as I can tell, "Known issue" no. 2 in the readme:

It is not possible to define a class that extends a proxied class. This includes using a proxied class in Object.create.

I guess this might be something to pick up again after the project has moved to isolated-vm? Unless I have completely missed the mark here and there is an easier solution right in front of me that I can't think of 😅

jonasgheer avatar Oct 07 '23 17:10 jonasgheer

I guess this might be something to pick up again after the project has moved to isolated-vm (https://github.com/usebruno/bruno/issues/263)?

Yes, that makes sense.

Now that the CLI route didn't work, I recommend trying the API route. I created a separate ticket (https://github.com/usebruno/bruno/issues/454) to track doing this via the API.

@jonasgheer Can you take a stab at it?

helloanoop avatar Oct 07 '23 17:10 helloanoop

I've tried to download a .zip file (from a local server) using the post response scripting.

This is my post response script:

const fs = require('fs');
const path = require('path');
const buffer = Buffer.from(res.getBody(), 'binary');
const zipFilename = "output.zip";
const zipPath = path.join(bru.cwd(), zipFilename);
const contentLength = res.headers['content-length'];
if (buffer.length !== parseInt(contentLength)) {
  throw new Error('Downloaded content length does not match the expected content length.');
}
fs.writeFileSync(zipPath, buffer);

However, I run into the error every time. The file is a stream (with transfer-encoding: chunked). If I don't use the post-response script, I get a response with binary text. Postman allows downloading request responses. Is there anything wrong in the script, or is Bru messing up the download somewhere?

xeophon avatar Oct 10 '23 15:10 xeophon

I think this is due to the underlying vm2 module not allowing write access to the file system. I feel we should be able to fix this at the code level by passing an fs wrapper here into the script runtime.

The idea here is that vm2 delegates the write task back to the Electron layer, which has access to write to the filesystem.

If anyone has bandwidth in the community, please pick this up.

cc @DivyMohan14

helloanoop avatar Oct 10 '23 21:10 helloanoop

Writing kinda works. If I rewrite the code and ignore the error, the file gets written, but it is corrupt. Trying to unzip it using unzip (from apt) returns this:

error [output.zip]:  missing 291 bytes in zipfile
  (attempting to process anyway)
error [output.zip]:  start of central directory not found;
  zipfile corrupt.
  (please check that you have transferred or created the zipfile in the
  appropriate BINARY mode and that you have compiled UnZip properly)

xeophon avatar Oct 10 '23 21:10 xeophon

Makes sense @helloanoop let me have a look at this ...

DivyMohan14 avatar Oct 10 '23 21:10 DivyMohan14

@DivyMohan14 I confirm that writing indeed works

const fs = require('fs');
const path = require('path');
const filepath = path.join(bru.cwd(), 'log.txt');

fs.writeFileSync(filepath, JSON.stringify(res.getBody()));

@DivyMohan14 When you have some time, please see if you can implement streaming event listening on the res object.

We should then be able to use something like this:

let responseData = '';

res.on('data', (chunk) => {
    responseData += chunk;
});

res.on('end', () => {
    try {
        let jsonData = JSON.parse(responseData);

        // write to file
    } catch (error) {
        // handle invalid JSON in the response
    }
});

helloanoop avatar Oct 10 '23 21:10 helloanoop

I did the setup that @Xceron might have, and yeah, the zip file write indeed does not work as expected; and as you said @helloanoop, the normal write works as expected.

The problem here does not seem to be with the file system or write access; it looks to be some issue with the response configuration. I am on it and expect to find the root cause in some time.

DivyMohan14 avatar Oct 11 '23 09:10 DivyMohan14

Looks like I found the issue: it is related to axios returning binary data. Dealing with that seems to be a problem, so an easier solution is to get the response as an arraybuffer instead.

I wrote logic in the prepare-request step to check if the Content-Type is application/zip and, if so, set responseType to arraybuffer in the axios config.

A good resource for the issue: link here

@helloanoop please have a look at PR

@Xceron can you try running the server from my branch in the meantime to check if this solves the issue?

DivyMohan14 avatar Oct 11 '23 09:10 DivyMohan14
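
The Content-Type check described above can be sketched as a small predicate. The function name is hypothetical and the real change lives in Bruno's prepare-request code; the comment above mentions only application/zip, so including application/octet-stream here is an extra assumption:

```javascript
// Hypothetical sketch of the prepare-request logic: choose the axios
// responseType based on the Content-Type header.
function pickResponseType(contentType) {
  // application/zip comes from the thread; octet-stream is an added assumption.
  const binaryTypes = ['application/zip', 'application/octet-stream'];
  // Drop parameters like "; charset=utf-8" and normalize case before comparing.
  const normalized = (contentType || '').split(';')[0].trim().toLowerCase();
  return binaryTypes.includes(normalized) ? 'arraybuffer' : undefined;
}

// axios would then be configured with, e.g.:
//   axios({ url, method, responseType: pickResponseType(headers['content-type']) });
```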