MOOC by the University of Geneva on managing ADHD
For those who don't understand reverse proxies, this is a good explanation
This seems like a good idea! I'm curious to see the challenges at scale
A self-hosted, distributed search engine
On the latest Fedora release, which brings GNOME 48, appindicator.js is somewhat broken when used with the previous extensions pattern. This one fixes it. Found on https://bbs.archlinux.org/viewtopic.php?id=304357, where people are still debugging things for the community ahead of mainstream releases
When the IMF discredits the economic approach so favored by politicians :-/
A great site that lets you do hardware-specific equalizing
# Bash completion for the nft command: completes subcommands, object types,
# address families, table names and set names.
_nft()
{
    local cur prev
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"
    # Address families understood by nft
    local families="ip ip6 arp bridge inet netdev"
    # Names of all currently defined sets
    local sets=$(nft list sets | grep -i "set" | awk '{print $2 }' | tr -d ' ')
    if [ $COMP_CWORD -eq 1 ]; then
        # First argument: the subcommands handled by this completion
        COMPREPLY=( $(compgen -W "add flush list" -- $cur) )
    elif [ $COMP_CWORD -eq 2 ]; then
        # Second argument: object type, depending on the subcommand
        case "$prev" in
            "add")
                COMPREPLY=( $(compgen -W "element map table set" -- $cur) )
                ;;
            "list")
                COMPREPLY=( $(compgen -W "chains ruleset set sets table tables" -- $cur) )
                ;;
            "flush")
                COMPREPLY=( $(compgen -W "table" -- $cur) )
                ;;
            *)
                ;;
        esac
    elif [ $COMP_CWORD -eq 3 ]; then
        # Third argument: address family after "table", table name after
        # "set", "element" or "map"
        case "$prev" in
            "table")
                COMPREPLY=( $(compgen -W "${families}" -- $cur) )
                ;;
            "set"|"element"|"map")
                local tables=$(nft list tables | awk '{print $3 }' | tr -d ' ')
                COMPREPLY=( $(compgen -W "${tables}" -- $cur) )
                ;;
            *)
                ;;
        esac
    elif [ $COMP_CWORD -eq 4 ]; then
        local tables=$(nft list tables | awk '{print $3 }' | tr -d ' ')
        # After an address family, offer the tables of that family
        if [[ "$families" =~ "$prev" ]]; then
            local tables=$(nft list tables "${prev}" | awk '{print $3 }' | tr -d ' ')
            COMPREPLY=( $(compgen -W "${tables}" -- $cur) )
        fi
        # After a table name, offer set names (for list/flush and "add element")
        if [ "${COMP_WORDS[1]}" != "add" ] && [[ "$tables" =~ "$prev" ]]; then
            COMPREPLY=( $(compgen -W "${sets}" -- $cur) )
        fi
        if [ "${COMP_WORDS[1]}" == "add" ] && [ "${COMP_WORDS[2]}" == "element" ] && [[ "$tables" =~ "$prev" ]]; then
            COMPREPLY=( $(compgen -W "${sets}" -- $cur) )
        fi
    fi
    return 0;
}
complete -F _nft nft
Li-Fi was installed as far back as 2022 at the Chantefleur nursery school (L'Isle Adam), which was meant to be a showcase for Île-de-France. After a few months, however, the assessment turns out to be "mixed":
Installation and performance: Four classrooms and the library are fitted with Li-Fi emitters mounted up high. Tests show throughput of up to 86 Mbit/s downstream (well below the theoretical promises) and frequent disconnections.
Unused tablets: The LiFiMAX Tab tablets supplied for the children were quickly put away in a cupboard. Their battery life is judged "pitiful" and they raise logistical problems (no charging hub, awkward storage, managing the hardware in an environment with very young pupils).
Lack of follow-up: Oledcomm, the company responsible, has not come back to fix the problems. The municipality seems barely involved in maintenance. One tablet is literally dangling from its power wires, unrepaired.
Educational use called into question: Teachers do not want to increase screen exposure for children aged 3 to 5, and prefer other, more suitable activities.
Conclusion: Li-Fi has not won over other schools, all the more so since the Abeille law only formally bans Wi-Fi in daycare centres and nurseries. The technology seems more destined for fields such as aerospace or the military, where Wi-Fi is not an option.
A nice little cheatsheet on clean code, very well written
Take an in-depth look at eBPF, a technology that allows you to run sandboxed programs within the Linux kernel.
A playground for Prometheus relabeling rules
One of the newer tricks in systemd is that you can set "IPAddressDeny=" and then "IPAddressAllow=" to keep a program from exchanging traffic with the rest of the world. For a program that's only ever supposed to talk to the local network, this was a good idea.
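A minimal sketch of what that could look like, assuming a hypothetical unit called myservice.service that should only talk to loopback and a 192.168.1.0/24 LAN (both the unit name and the subnet are placeholders to adapt):

# /etc/systemd/system/myservice.service.d/lan-only.conf (hypothetical drop-in)
[Service]
# Deny all traffic by default...
IPAddressDeny=any
# ...then allow only loopback and the local subnet
IPAddressAllow=localhost
IPAddressAllow=192.168.1.0/24

Reload systemd (systemctl daemon-reload) and restart the unit for the filter to apply; it needs a kernel with cgroup/BPF support.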
openssl req -nodes -x509 -sha256 -newkey rsa:4096 \
  -keyout example.org.key \
  -out example.org.crt \
  -days 356 \
  -subj "/C=NL/ST=Zuid Holland/L=Rotterdam/O=ACME Corp/OU=IT Dept/CN=example.org"  \
  -addext "subjectAltName = DNS:localhost,DNS:example.org"
Individual science fiction stories may seem as trivial as ever to the blinder critics and philosophers of today — but the core of science fiction, its essence, the concept around which it revolves, has become crucial to our salvation if we are to be saved at all.
Command line tool for URL parsing and manipulation
Replace the host name of a URL:
$ trurl --url https://curl.se --set host=example.com
https://example.com/
Create a URL by setting components:
$ trurl --set host=example.com --set scheme=ftp
ftp://example.com/
Redirect a URL:
$ trurl --url https://curl.se/we/are.html --redirect here.html
https://curl.se/we/here.html
Change port number:
$ trurl --url https://curl.se/we/../are.html --set port=8080
https://curl.se:8080/are.html
Extract the path from a URL:
$ trurl --url https://curl.se/we/are.html --get '{path}'
/we/are.html
Extract the port from a URL:
$ trurl --url https://curl.se/we/are.html --get '{port}'
443
Append a path segment to a URL:
$ trurl --url https://curl.se/hello --append path=you
https://curl.se/hello/you
Append a query segment to a URL:
$ trurl --url "https://curl.se?name=hello" --append query=search=string
https://curl.se/?name=hello&search=string
Read URLs from stdin:
$ cat urllist.txt | trurl --url-file -
...
Output JSON:
$ trurl "https://fake.host/hello#frag" --set user=::moo:: --json
[
  {
    "url": "https://%3a%3amoo%3a%3a@fake.host/hello#frag",
    "parts": {
      "scheme": "https",
      "user": "::moo::",
      "host": "fake.host",
      "path": "/hello",
      "fragment": "frag"
    }
  }
]
Remove tracking tuples from query:
$ trurl "https://curl.se?search=hey&utm_source=tracker" --trim query="utm_*"
https://curl.se/?search=hey
Show a specific query key value:
$ trurl "https://example.com?a=home&here=now&thisthen" -g '{query:a}'
home
Sort the key/value pairs in the query component:
$ trurl "https://example.com?b=a&c=b&a=c" --sort-query
https://example.com?a=c&b=a&c=b
Work with a query that uses a semicolon separator:
$ trurl "https://curl.se?search=fool;page=5" --trim query="search" --query-separator ";"
https://curl.se?page=5
Accept spaces in the URL path:
$ trurl "https://curl.se/this has space/index.html" --accept-space
https://curl.se/this%20has%20space/index.html
It's quite easy to compile the C source with GCC:
$ make
cc  -W -Wall -pedantic -g   -c -o trurl.o trurl.c
cc   trurl.o  -lcurl -o trurl
trurl is also available in some Linux distributions. You can try searching for it using the package manager of your preferred distribution.
make, just like on Linux.
Development files of libcurl (e.g. libcurl4-openssl-dev or
libcurl4-gnutls-dev) are needed for compilation. Requires libcurl version
7.62.0 or newer (the first libcurl to ship the URL parsing API).
trurl also uses CURLUPART_ZONEID added in libcurl 7.81.0 and
curl_url_strerror() added in libcurl 7.80.0
It would certainly be possible to make trurl work with older libcurl versions
if someone wanted to.
trurl builds with libcurl older than 7.81.0 but will then not work as
well. For all the documented goodness, use a more modern libcurl.
"for begginers" haha, I'd like to see friends of mine try this one out
I was too lazy to write anything remotely as clean as this so I asked Kagi's integration of Mistral Large:
Sure, here's an example of how you could create a systemd timer to automatically renice existing Google Chrome processes.
- First, create a new service file. You can use any text editor you like, but for this example, we'll use nano:
sudo nano /etc/systemd/system/renice-chrome.service
- In the service file, add the following lines:
[Unit]
Description=Renice Google Chrome processes
[Service]
Type=oneshot
ExecStart=/bin/bash -c 'for pid in $(pgrep chrome); do ionice -c3 -p $pid; renice 19 -p $pid; done'
This service file defines a new service that runs a bash command to find all Google Chrome processes (using pgrep chrome), and then applies ionice and renice to each process.
Save and close the file.
Next, create a new timer file:
sudo nano /etc/systemd/system/renice-chrome.timer
- In the timer file, add the following lines:
[Unit]
Description=Run renice-chrome.service every hour
[Timer]
OnCalendar=hourly
Persistent=true
[Install]
WantedBy=timers.target
This timer file defines a new timer that runs the renice-chrome.service every hour.
Save and close the file.
Now, enable and start the timer:
sudo systemctl enable renice-chrome.timer
sudo systemctl start renice-chrome.timer
- You can check the status of the timer with:
sudo systemctl list-timers
This will show you a list of all active timers, including when they're next due to run.
Please note that this will only affect Google Chrome processes that are already running when the timer runs. If you want to also lower the priority of new Google Chrome processes as they're launched, you would need to use a different approach, such as the script I described in my previous answer.
Also, remember to replace chrome with the correct process name if it's different on your system. You can check the process name with pgrep chrome. If it doesn't return anything, try pgrep chromium or pgrep google-chrome.
Anyway, just before announcing ssh-1.0 in July 1995, I sent this e-mail to IANA:
From ylo Mon Jul 10 11:45:48 +0300 1995 From: Tatu Ylonen ylo@cs.hut.fi
To: Internet Assigned Numbers Authority iana@isi.edu
Subject: request for port number
Organization: Helsinki University of Technology, Finland
Dear Sir, I have written a program to securely log from one machine into another over an
insecure network. It provides major improvements in security and functionality over existing
telnet and rlogin protocols and implementations. In particular, it prevents IP, DNS and
routing spoofing.  My plan is to distribute the software freely on the Internet and to get it
into as wide use as possible. I would like to get a registered privileged port number for
the software.
The number should preferably be in the range 1-255 so that it can be used in the WKS field
in name servers. I'll enclose the draft RFC for the protocol below. The software has been in
local use for several months, and is ready for publication except for the port number. If
the port number assignment can be arranged in time, I'd like to publish the software already
this week. I am currently using port number 22 in the beta test.
It would be great if this number could be used (it is currently shown as Unassigned in the
lists). The service name for the software is "ssh" (for Secure Shell).
Yours sincerely,  Tatu Ylonen ylo@cs.hut.fi  ... followed by protocol specification
for ssh-1.0
The next day, I had an e-mail from Joyce waiting in my mailbox:
Date: Mon, 10 Jul 1995 15:35:33 -0700 From: jkrey@ISI.EDU To: ylo@cs.hut.fi Subject:
Re: request for port number  Cc: iana@ISI.EDU
Tatu,  We have assigned port number 22 to ssh, with you as the point of contact.  Joyce
There we were! SSH port was 22!!!
DisplayLink is really garbage
git clone git@github.com:DisplayLink/evdi.git
sudo mv evdi /usr/src/evdi
cd /usr/src
sudo mv evdi-1.12.0 evdi-1.12.0-backup
sudo ln -s evdi/module evdi-1.12.0
cd evdi-1.12.0
sudo make
sudo make install
restic.sh
#!/bin/bash
timeout 30 mount /home/backup/ && time autorestic --config /home/changeme/.autorestic.yml backup -a ; umount /home/backup
restic.service
[Unit]
Description="Restic script"
[Service]
ExecStart=/bin/bash -c "/home/changeme/restic.sh"
restic.timer
[Unit]
Description="Run restic.service 5min after boot and every 24 hours relative to activation time"
[Timer]
OnBootSec=5min
OnUnitActiveSec=24h
OnCalendar=Mon..Fri *-*-* 10:00:*
Unit=restic.service
[Install]
WantedBy=multi-user.target
/etc/fstab
storage.ip.address:/home/changeme/backup /home/backup nfs4  noauto,rw,hard,intr,rsize=8192,wsize=8192,timeo=14 0 0
~/.autorestic.yml
version: 2
locations:
  home:
    from: /home/changeme/
    to: remote
    options:
          backup:
            exclude:
              - 'Dropbox/*'
              - 'Dropbox*/*'
              - '.cache'
              - '.*CacheStorage.*'
              - '.*CachedData.*'
              - '.*Cache.*'
              - '.var/app'
              - '.cargo'
              - '.ssh'
              - '.rustup'
            exclude-file: /home/changeme/.gitignore
backends:
  remote:
    type: local
    path: "/home/backup/"
    key: "changeme"Learn how to use MySQL SQL commands with this comprehensive tutorial. Discover four subgroups of MySQL SQL commands: DDL for database schema, DML for data manipulation, DCL for access control, and TCL for transaction management.
A pain to dig up again every time you want to enable fractional scaling
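Assuming this is the usual GNOME-on-Wayland toggle (the note doesn't say which desktop, so treat that as an assumption), the setting lives in mutter's experimental features:

# Assumption: GNOME/mutter on Wayland; enables experimental fractional scaling
gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

A re-login may be needed before the extra scaling options show up in Display settings.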
Finally, an AQI/micrograms conversion chart https://qph.fs.quoracdn.net/main-qimg-fc087d00bff5bb9da5477ac5530c5c3e
I get that people want to talk about health impact, but it's a pain to constantly convert apples into oranges whenever measurements come up
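For reference, the US EPA flavour of the AQI (a common definition, not necessarily the one the linked chart uses) maps a concentration C in µg/m³ onto the index piecewise-linearly between breakpoints, which is why there is no single conversion factor:

I = \frac{I_{hi} - I_{lo}}{C_{hi} - C_{lo}} \, (C - C_{lo}) + I_{lo}

where [C_{lo}, C_{hi}] is the breakpoint interval containing C and [I_{lo}, I_{hi}] is the matching index interval from the breakpoint table.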
via Victor, one to keep an eye on!
Linux kernel map, via Noiz
LIDA is a tool to automatically explore data, generate visualizations and infographics from data using large language models like ChatGPT and GPT4
via Salim, much interechting
Temptations of an open-source browser extension developer #670
I got some help from support. The key piece was to run "chattr -R -i ~/Dropbox" which revealed the problem files. I had created symbolic links from my dotfile settings in .config, .themes, and .local and placed the actual files in dropbox folders so that some of my settings would get backed up to dropbox. One of the applications put unchangeable files in dropbox and that is what caused the problems. Since I reinstalled fresh, once I was able to find the problem files via chattr and move them out of Dropbox, syncing resumed and is now fixed.
does the trick
chattr -R -i ~/Dropbox 2> /dev/stdout|rg -io '/home/CHANGEME/Dropbox.*'
via Nico
Andrew Kelley, creator of Zig, picks up where Mike Acton left off to teach us practical ways to apply data-oriented design
References:
- CppCon 2014: Mike Acton "Data-Oriented Design and C++": youtube.com/watch?v=rX0ItVEVjHc
- Handmade Seattle: handmade-seattle.com/
- Richard Fabian, 'Data-Oriented Design': dataorienteddesign.com/dodbook/
- IT Hare, 'Infographics: Operation Costs in CPU Clock Cycles': ithare.com/infographics-operation-costs-in-cpu-clock-cycles/
- The Brain Dump, 'Handles are the better pointers': floooh.github.io/2018/06/17/handles-vs-pointers.html
via Noiz