As it analyzes our files one after another, the Flutter linter applies a substantial set of rules and, above all, for each of these rules, provides us with the explanation associated with the rule, an example of good and bad usage, and sometimes an automatic fix.
To break down each component of a lint rule, let's take a closer look at the example of the "use_decorated_box" rule:
This rule consists of:
– a unique code, "use_decorated_box": this is its identifier, which can be used, in particular, when you want to ignore the rule by writing:
// ignore: use_decorated_box
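For example, placed just above the offending expression (a minimal sketch; this Container usage is only an illustration of code that would otherwise trigger the rule):

// ignore: use_decorated_box
return Container(
  decoration: const BoxDecoration(color: Colors.grey),
  child: child,
);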
– a description: a clear and concise text quickly explaining what the rule is about. This text is displayed to users when they hover over a piece of code that does not respect the rule.
– documentation: this part is not directly visible in our IDE. It is the analyzer documentation available online, which lets us understand how to apply the rule as well as why it exists.
It usually contains examples of code that respects the rule and examples that do not.
– an automatic fix (optional): some rules come with the bonus of being automatically fixable. As developers, all that is left for us to do is hit ALT+Enter and Android Studio offers the fix.
Later on, when we try to create our own rules, we will try to keep the same structure, with a code, a description, documentation, and a quick-fix.
Disclaimer: the plugin's developers have written an article (in English) describing it, in which you will find most of the content I am going to talk about, with different examples that follow the same mechanics. We will therefore go through some parts fairly quickly.
We will also take as examples rules that we implemented on our "ens" project, so there will be a certain number of variables suffixed with "ens".
Before getting into the details of "how" to use custom_lint, a few words on the "why" of using it.
Flutter's built-in linter is already a very good tool, but its rules are "generic" in the sense that they can theoretically apply to any project, and they do not cover standards decided by the development team or even the product team.
They also only apply to the core Flutter framework, and therefore not to external libraries your team may have added. The example provided by the custom_lint team (Invertase) is, as it happens, a rule applicable to the provider and riverpod libraries.
Custom_lint, since it is the subject of this second part, is a plugin that can be added to Flutter's base analyzer. To describe it quickly, it lets us hook into a Flutter package in which we can write classes that trigger warnings and/or errors as the analyzer goes through each of our files.
As seen above, custom_lint asks us to pull a dependency on another Dart package into our project. And apart from the import of the lib itself, that will be the only import to add, both of them obviously in dev_dependencies. As one might expect, there is therefore no import to add to our application "in production".
Then, in this Dart package we create, we must declare the list of rules to add to our plugin, rules that we will write afterwards. In this package, we will also pull the plugin dependencies that will provide the tools to implement the various rules. Which gives us:
pubspec.yaml
dev_dependencies:
  custom_lint: ^0.5.3
  ens_custom_lint_rules:
    path: ./ens_custom_lint_rules/
ens_custom_lint_rules/pubspec.yaml
dependencies:
  analyzer: ^5.11.0
  analyzer_plugin: ^0.11.2
  custom_lint: ^0.5.3
  custom_lint_builder: ^0.5.3
  custom_lint_core: ^0.5.3
ens_custom_lint_rules/lib/ens_custom_lint_rules.dart
library;
export 'src/ens_custom_lint_rules_base.dart';
ens_custom_lint_rules/lib/src/ens_custom_lint_rules_base.dart
PluginBase createPlugin() => _EnsLintPlugin();

class _EnsLintPlugin extends PluginBase {
  @override
  List<LintRule> getLintRules(CustomLintConfigs configs) => [
        _DontUseSingleChildScrollView(),
      ];
}
And there you go! The configuration is done; all that's left is to implement our first rule, arbitrarily named _DontUseSingleChildScrollView, which, as its name suggests, will raise a warning when a developer tries to use the Flutter widget SingleChildScrollView. (We set up this rule on the project to force ourselves to use a custom widget with the same properties but with a built-in scrollbar.)
As your IDE will tell you if you have written the code above, we need to implement a class extending DartLintRule.
class _DontUseSingleChildScrollView extends DartLintRule {
  _DontUseSingleChildScrollView() : super(code: _code);

  // The lint code shown to the developer; its content is described below.
  static const _code = LintCode(
    name: 'dont_use_single_child_scroll_view',
    problemMessage: 'Avoid SingleChildScrollView: use the custom scrollbar widget instead.',
  );

  @override
  void run(
    CustomLintResolver resolver,
    ErrorReporter reporter,
    CustomLintContext context,
  ) {
    // TODO
  }

  @override
  List<Fix> getFixes() => [];
}
Here, in our class's constructor, we added the code of our warning as well as the description message that will be shown to developers when they hover over it. This is also where we can choose the severity, as well as a URL that links to the rule's documentation.
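Concretely, that constructor setup could look like the sketch below, assuming the LintCode API from custom_lint_core; the name, messages, URL, and severity shown here are illustrative, not the project's actual values:

static const _code = LintCode(
  name: 'dont_use_single_child_scroll_view',
  problemMessage: 'Avoid SingleChildScrollView: it does not display a scrollbar.',
  correctionMessage: 'Use ScrollviewWithScrollbar instead.',
  url: 'https://example.com/lints/dont_use_single_child_scroll_view', // hypothetical docs URL
  errorSeverity: ErrorSeverity.WARNING,
);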
It is also, unfortunately, the last easy step of our journey! Hang in there!
For this part, let's look at the run method of the class above. This method will be called every time a file is analyzed. It also gives us three parameters:
– the resolver: it gathers all the information we may need about the current file. It can, for example, return the file's path. On the project, we used it to implement a rule that raises an error if the name of the file being analyzed does not end with "_test.dart".
void run(
  CustomLintResolver resolver,
  ErrorReporter reporter,
  CustomLintContext context,
) {
  var path = resolver.path;
  if (path.contains('/test/') && !(path.endsWith('_test.dart'))) {
    // TODO: report an error
  }
}
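To actually raise the error where the TODO sits, one option is the reporter's offset-based method; a hedged sketch (it assumes a LintCode constant named code, as in the previous class, and underlines the first character of the file):

reporter.reportErrorForOffset(code, 0, 1); // flag the very start of the offending file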
– the reporter: as its name suggests, it is used to report errors. It provides several methods for reporting an error by tying it to different elements. The choice of method is fairly important, since it determines the code that gets underlined by the error. It also lets you make the error message dynamic, for example to display information specific to the current error. Below, we implemented an error that reports the parameters missing from the "props" of a class extending Equatable:
if (nomManquants.isNotEmpty) {
  reporter.reportErrorForElement(
    LintCode(
      name: 'add_all_props_in_equatable',
      problemMessage: 'Params ${nomManquants.join(', ')} are missing from props',
      correctionMessage: 'Add all the variables to props',
    ),
    element,
  );
}
– the context: in my opinion the most complex parameter of the three. It contains information about the ongoing analysis. We will only study one aspect of this parameter here: the registry. The registry lets us register callbacks for when the analyzer encounters certain events while going through the file, such as the declaration of a class, a variable, or a method. Below, we take a closer look at the rule mentioned at the very top, the one flagging usage of the SingleChildScrollView widget:
@override
void run(
  CustomLintResolver resolver,
  ErrorReporter reporter,
  CustomLintContext context,
) {
  context.registry.addConstructorName((node) {
    var className = node.staticElement?.enclosingElement.name;
    var classDisplayName = node.staticElement?.enclosingElement.displayName;
    if ((className == 'SingleChildScrollView' || classDisplayName == 'SingleChildScrollView') &&
        node.staticElement != null) {
      reporter.reportErrorForNode(code, node);
    }
  });
}
At this point, if you are like me and not used to wandering through the innards of the Dart language, everything starts to get complicated, and we leave the usual dev experience behind. Let's recap a bit of what happens in the method above.
context.registry.addConstructorName((node) {
...
});
Here we register a callback that is called whenever a class constructor is invoked. In our case, the constructor call we are interested in is SingleChildScrollView's. We can also see that inside this callback we get access to a node parameter of type ConstructorName. If you later embark on the adventure of writing your own rule, I advise you to get into the habit of browsing the code of these various objects, where you will find a welcome description of what they represent and what they look like.
Still here? Let's continue.
context.registry.addConstructorName((node) {
  var classDisplayName = node.staticElement?.enclosingElement.displayName;
  if (classDisplayName == 'SingleChildScrollView') {
    reporter.reportErrorForNode(code, node);
  }
});
With the help of our node parameter, we can dig into the information contained in our constructor call. In this case, its static type, which is in fact the class associated with this constructor. Then, inside that static type, the display name of the type, which in the case of our rule would be 'SingleChildScrollView'. In that case, as seen previously, we just have to report an error and we're done!
This rule's code is not perfect, since it only checks the type's name, so it will also raise an error if you use a class named SingleChildScrollView even when it is not the one coming from material. I leave it up to you to decide whether that would be a false positive or a genuine warning.
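One way to tighten the rule would be to also check which library the constructor's class comes from, along these lines (an untested sketch; the package:flutter/ check is my assumption, not the article's code):

context.registry.addConstructorName((node) {
  var element = node.staticElement?.enclosingElement;
  var libraryUri = element?.library?.source.uri.toString();
  // Only flag the widget when it really comes from the Flutter framework.
  if (element?.displayName == 'SingleChildScrollView' &&
      (libraryUri?.startsWith('package:flutter/') ?? false)) {
    reporter.reportErrorForNode(code, node);
  }
});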
Your lint warning is now ready to be shown to developers, but you can still help them fix it by suggesting a quick-fix, as seen in the first part of this article. We will stay with the example above and our banishment of SingleChildScrollView to understand how to add a quick-fix. If we go back to our _DontUseSingleChildScrollView class, we find a method we had left aside: getFixes. When we have no quick-fix to offer the developer, we can simply return an empty list; here we are going to create a new class (let's call it _ReplaceWithScrollviewWithScrollbar) and return it:
@override
List<Fix> getFixes() => [_ReplaceWithScrollviewWithScrollbar()]; // the DartFix class we are about to write
We now discover a new kind of class: DartFix. As with the DartLintRule class, we will deal with the run method, which takes the same parameters and adds two new ones:
– analysisError: which contains the information related to the error (or warning) reported by the rule written in the previous class.
– others: which contains the other errors of the same type reported in the same file.
Here, we will mainly use the analysisError to find where to apply our fix:
@override
void run(
  CustomLintResolver resolver,
  ChangeReporter reporter,
  CustomLintContext context,
  AnalysisError analysisError,
  List<AnalysisError> others,
) {
  final changeBuilder = reporter.createChangeBuilder(
    message: 'Replace with ScrollviewWithScrollbar',
    priority: 1,
  );
  changeBuilder.addDartFileEdit((builder) {
    builder.addSimpleReplacement(
      SourceRange(analysisError.offset, analysisError.length),
      'ScrollviewWithScrollbar',
    );
    builder.importLibraryElement(Uri.parse('package:my_project/ui/widgets/scrollview_with_scrollbar.dart'));
  });
}
Here we reuse the reporter seen previously, to which we now attach a changeBuilder instead of an error, and then attach two changes to that change builder:
builder.addSimpleReplacement(
  SourceRange(analysisError.offset, analysisError.length),
  'ScrollviewWithScrollbar',
);
We first replace the SingleChildScrollView constructor, which should normally match the positions delimited by the analysisError, with the class we want to use instead, namely ScrollviewWithScrollbar.
builder.importLibraryElement(Uri.parse('package:my_project/ui/widgets/scrollview_with_scrollbar.dart'));
Then, to be able to use it, we add an import to that library. Note that if the import already exists, it will not be added twice. And if it does not exist, it will be added, in the right place, sorted alphabetically.
And that's it! We have covered everything there is to write to have our new sets of rules ready to analyze our project!
In theory, you now have everything you need to get started writing your first rule. In practice, navigating the various objects you will be offered, notably the nodes that appear as the analyzer goes through your files, is far from intuitive at first, and it is often very handy to be able to take a look at the data available to you while a file is being analyzed.
Fortunately, it is possible to quickly and easily attach a debugger to your lint process, set breakpoints in it, and so on: in short, everything you are already used to finding in your regular Flutter debugger!
To do so, just run the following command:
dart run custom_lint --watch
Once your project has compiled, you will see output similar to:
The Dart VM service is listening on http://127.0.0.1:62818/Z0UAb7PkQ5U=/
The Dart DevTools debugger and profiler is available at: http://127.0.0.1:62818/Z0UAb7PkQ5U=/devtools?uri=ws://127.0.0.1:62818/Z0UAb7PkQ5U=/ws
If you click on the second link, you will be redirected to a console with several tabs (I have not yet had time to dig into all of them, sorry), including the debugger's. Once inside, you can navigate to the file where you want to set a breakpoint.
You will also have access to a console, even with a bit of autocompletion, to test any operation you like. Personally, that is how I managed to move forward, small step by small step, to understand the various objects to manipulate, and finally to succeed in writing my various rules, and then improving them.
The folks behind Linux Mint have been on a roll, pushing out major releases such as Linux Mint 21.2 and LMDE 6.
And the flurry of releases doesn't seem to be stopping, as we now have another release from them in the form of 'Linux Mint 21.2 Edge'.
Allow me to tell you more about it.
Powered by the Linux kernel 6.2 release, Linux Mint 21.2 Edge is tailored for users whose newer hardware doesn't play nicely with the Linux kernel 5.15 LTS that the regular Linux Mint 21.2 release ships with.
Definitely a good Linux distro option for Intel Arc graphics users now!
If you are curious about the "Edge" ISO, here's what the documentation says:
In addition to its regular ISO images, Linux Mint sometimes provides an “edge” ISO image for its latest release. This image ships with newer components to be able to support the most modern hardware chipsets and devices.
Additionally, this ISO also brings back support for Secure Boot. This should be a useful addition for those who want it.
Not to forget, the Linux Mint 21.2 Edge release is offered solely in the Cinnamon desktop flavor; there are no Xfce or MATE variants.
That said, apart from the newer hardware support, you won't find any differences between the Edge variant and the regular release.
Want to give it a try?
Head over to the official website to grab the ISO from one of the many available download mirrors.
Do note that this release is only available as a 64-bit release, with no 32-bit options.
💬 Will you be trying out the edge ISO? Let us know below!
Rather than MongoDB's AI-powered SQL converter, natural language queries, or ML visualization releases, it's the document database company's strategy for vertical markets that is catching one analyst's eye…
This Internet press review is part of the monitoring work carried out by April as part of its work defending and promoting free software. The positions presented in the articles are those of their authors and do not necessarily match April's.
✍ Thierry Noisette, Saturday, September 30, 2023.
In brief: examples of good CSR practices. Richard Stallman has cancer. Activists at Framasoft and LFI talk about their commitment.
Also:
✍ Thierry Noisette, Friday, September 29, 2023.
In the "Annales des Mines" journal, which devotes an issue to digital sovereignty, Jean-Paul Smets, entrepreneur and free-software advocate, delivers an indictment of the French "cloud de confiance" (trusted cloud) scheme and points to other forms of discrimination against free-software solutions.
✍ Maxime champigneux, Thursday, September 28, 2023.
Discover software development: key phases, tools, and tips for learning and contributing. An adventure of innovation and growth.
✍ Nils Hollenstein, Monday, September 25, 2023.
Often volunteers, free-software developers contribute greatly to today's digital world. Two thirty-something free-software advocates describe a sector being reshaped in the face of the crushing weight of the tech giants, from Google and Microsoft to Twitter and Facebook.
We have set up a metrics exporter; the project (open source, of course) is available on GitHub (https://github.com/xperimental/nextcloud-exporter). We will connect it to Nextcloud's API, and the exporter will periodically expose information, scraped by Prometheus and then consumed by Grafana.
The values exposed by the exporter are listed below, and we can only hope the list will grow with future updates:
| name | description |
|---|---|
| nextcloud_active_users_daily_total | Number of active users in the last 24 hours |
| nextcloud_active_users_hourly_total | Number of active users in the last hour |
| nextcloud_active_users_total | Number of active users for the last five minutes |
| nextcloud_apps_installed_total | Number of currently installed apps |
| nextcloud_apps_updates_available_total | Number of apps that have available updates |
| nextcloud_database_info | Contains meta information about the database as labels. Value is always 1. |
| nextcloud_database_size_bytes | Size of database in bytes as reported from engine |
| nextcloud_exporter_info | Contains meta information of the exporter. Value is always 1. |
| nextcloud_files_total | Number of files served by the instance |
| nextcloud_free_space_bytes | Free disk space in data directory in bytes |
| nextcloud_php_info | Contains meta information about PHP as labels. Value is always 1. |
| nextcloud_php_memory_limit_bytes | Configured PHP memory limit in bytes |
| nextcloud_php_upload_max_size_bytes | Configured maximum upload size in bytes |
| nextcloud_scrape_errors_total | Counts the number of scrape errors by this collector |
| nextcloud_shares_federated_total | Number of federated shares by direction (sent / received) |
| nextcloud_shares_total | Number of shares by type: authlink (password-protected links), group, link (all shared links), user, mail, room |
| nextcloud_system_info | Contains meta information about Nextcloud as labels. Value is always 1. |
| nextcloud_up | Indicates if the metrics could be scraped by the exporter: 1 = successful, 0 = unsuccessful (server down, server/endpoint not reachable, invalid credentials, ...) |
| nextcloud_users_total | Number of users of the instance |
Let's assume you already have a working Nextcloud architecture, plus Grafana (with Prometheus installed). We will install the exporter as a Docker container.
You can install it on any machine; the only prerequisites are that it can send requests to the Nextcloud server and that it can be reached by Prometheus.
Let's create a nextcloud-exporter directory:
mkdir nextcloud-exporter
Then use the following docker-compose.yml:
---
version: "3.3"
services:
  nextcloud-exporter:
    container_name: nextcloud-exporter
    image: "xperimental/nextcloud-exporter"
    security_opt:
      - no-new-privileges:true
    command: nextcloud-exporter -c /conf/config.yml
    ports:
      - 127.0.0.1:9205:9205
    cap_add:
      - MKNOD
    volumes:
      - /var/nextcloud-exporter/nextcloud_exporter_conf:/conf:ro
    restart: always
volumes:
  nextcloud_exporter_conf:
    driver: local
Note also that the container will be exposed locally on port 9205. We will configure an Apache vhost so that it can be reached (behind an IP restriction and an .htpasswd) from the monitoring server.
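For reference, such a vhost could look roughly like this (a hedged sketch assuming Apache with mod_proxy and basic-auth modules enabled; the domain, allowed IP, and paths are placeholders):

<VirtualHost *:443>
    ServerName nextcloud-exporter.example.org

    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:9205/
    ProxyPassReverse / http://127.0.0.1:9205/

    <Location />
        AuthType Basic
        AuthName "Nextcloud exporter"
        AuthUserFile /etc/apache2/.htpasswd
        <RequireAll>
            # only the monitoring server, and only with valid credentials
            Require ip 203.0.113.10
            Require valid-user
        </RequireAll>
    </Location>
</VirtualHost>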
À l'intérieur de ce répertoire, créer nextcloud_exporter_conf
, qui abritera le fichier de configuration config.yml
:
# required
server: "https://URL_NEXTCLOUD"
# required for token authentication
authToken: "A_Generer"
# optional
listenAddress: ":9205"
timeout: "5s"
tlsSkipVerify: false
You will also need to generate a token for the authToken value. This is done from the Nextcloud side.
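According to the exporter's README, the token can be set on the Nextcloud server with an occ command along these lines (hedged; double-check the command against your Nextcloud and exporter versions):

sudo -u www-data php occ config:app:set serverinfo token --value "A_Generer"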
Once everything is done, let's start the container:
docker-compose up -d
The container is now up and running; the proxy configuration is left up to you.
Pour la suite de l'article, nous allons paramétrer la partie Prometheus. Le fichier de configuration est le suivant : /etc/prometheus/prometheus.yml
.
Ajoutons le job suivant (attention à l'intendation qui est indispensable ) :
  - job_name: 'nextcloud-test'
    metrics_path: '/'
    scrape_interval: 5s
    scheme: https
    basic_auth:
      username: "usernameDefini"
      password: "passwordHtAccess"
    static_configs:
      - targets:
          - 'URL_Nextcloud_Exporter'
Then restart Prometheus:
systemctl restart prometheus
Now you are free to add the panels of your choice, based on the values exposed by the exporter and described in the introduction.
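As a starting point, plain PromQL queries on the exported metrics are enough for most panels, for example (metric names taken from the table above):

# active users over the last five minutes
nextcloud_active_users_total

# free disk space in the data directory, in GiB
nextcloud_free_space_bytes / 1024 / 1024 / 1024

# whether the last scrape succeeded
nextcloud_up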
[Figure: example Grafana dashboard]
Here's a piece of software that will delight people who are both chess fans and command-line lovers. We all know those moments when we're stuck at work with not much to do, really wanting to unwind with a game of chess without leaving the terminal, so as not to get busted by our colleague Thierry. Well, I found exactly what you need: CLI-Chess!
CLI-Chess is such an experience that you'll feel like Anya Taylor-Joy within 5 minutes. You can play online with your Lichess.org account or offline against the AI known as Fairy-Stockfish.
And to install it, just run the following command:
pip install cli-chess
What's great about CLI-Chess is that you can customize the look of your board and pieces by choosing from the available themes. On top of that, CLI-Chess works on Linux, Windows, and macOS, so you can enjoy this chess experience no matter what computer you use.
If you're a chess beginner, don't worry: CLI-Chess is very approachable and easy to pick up.
To start an online game with your Lichess.org account, use the command:
cli-chess lichess USERNAME PASSWORD
To play offline against the Fairy-Stockfish AI, simply use the command:
cli-chess
Besides customizing the board's appearance, you can also configure the AI settings, such as difficulty and thinking time, to match your level and improve gradually. CLI-Chess also lets you replay games via Lichess TV, or even play blindfolded for the most hardcore among you.
CLI-Chess is therefore a great way to practice your game in a setting that feels comfortable, or at least discreet.
If that sounds like your thing, check it out here.
To optimize your SEO efforts, it is wise to call on a professional writer. You can find skilled SEO web writers on the Redacteur.com platform, which offers quality article and content writing services.
In this article, we will explore why and when to disavow links, and methods for identifying toxic backlinks. We will also see how to disavow these links via Google Search Console and discuss the potential risks involved in this practice.
Disavowing links is a procedure that consists of asking Google to ignore certain inbound links pointing to your website. This approach is mainly used to get rid of toxic or unwanted backlinks that could harm your SEO strategy and negatively impact your site's ranking in search results.
Disavowing links therefore becomes necessary when you identify toxic links likely to harm your site. This can happen in various situations:
If you receive a warning from Google in your Google Search Console about "unnatural links", it means you are being penalized for manipulative link practices. In that case, you must disavow the toxic links to correct those practices and restore your rankings.
Unscrupulous competitors may try to harm you by deliberately creating low-quality backlinks pointing to your site.
If you notice a sudden surge of suspicious backlinks while your traffic and rankings drop, you may be the victim of a negative SEO attack. Disavowing the unwanted links is then a measure to take to counter this malicious strategy.
Even if your site is not penalized, it is essential to maintain a clean, high-quality backlink profile to improve your SEO over the long term.
Disavowing links lets you take control of the links pointing to your site and remove those that could potentially harm your reputation with search engines.
When analyzing your backlink profile with SEO tools such as SEMrush or Majestic, you may spot artificial or suspicious links that do not comply with Google's quality guidelines. In that case, disavowing those links is a preventive action to keep your site healthy in SEO terms.
It should be stressed that disavowing links is a serious measure and should only be undertaken after a careful assessment of the links to disavow. Google considers disavowal a last-resort option, and it is important to disavow only links that are definitely hurting your ranking.
When your website is associated with backlinks from low-quality sources, such as spam sites, artificial link networks, or penalized domains, it can have negative consequences for your own SEO.
Google pays close attention to the quality of the links pointing to your site. In some cases, it may interpret low-quality links as an attempt to manipulate its algorithm.
Disavowing links with Google is therefore a crucial action to preserve your website's reputation and maintain a good ranking in search results. The goal is to flag unwanted, toxic, or low-quality links pointing to your site so that they are not taken into account when evaluating your SEO.
By disavowing unwanted links, you clean up your backlink profile by removing links that are low quality or considered manipulative by Google. You thereby improve your site's reputation and avoid the potential penalties that suspicious links could trigger.
If your site has been penalized by Google for link practices that break its guidelines, disavowing links can be a crucial step in fixing the situation. By removing the problematic links, you show Google that you are taking action to correct your mistakes and comply with the established rules.
Identifying toxic backlinks is a crucial step in keeping your link profile healthy. Two popular methods can help you spot unwanted links: using Semrush and evaluating Majestic's Trust Flow.
Semrush is a versatile SEO tool that can help you detect potentially toxic backlinks pointing to your site. Here is how to proceed:
Start by using Semrush to get a complete list of all the links pointing to your site.
Semrush assigns metrics such as domain Authority and Toxicity Score to links. Focus on links with a high toxicity score, as those are the ones that could hurt your SEO.
Make sure the sites linking to you are relevant to your topic. Links from irrelevant sites may be considered toxic by search engines.
Look for artificial link patterns, such as links from site networks or from pages created solely to generate links.
On Majestic, you will find a metric called Trust Flow, which evaluates the quality and relevance of links. By following a few steps, you can use it to identify toxic backlinks.
After getting the full report on your backlink profile from Majestic, look at your Trust Flow. The higher a referring site's Trust Flow, the more trustworthy it is. Focus on links from sites with a high Trust Flow to ensure the quality of your backlinks.
Links from a wide range of relevant sites are generally beneficial. Links from only a small number of sites can be viewed as suspicious.
Google Search Console is a very handy set of tools that Google offers to website owners. It lets them understand how their site is indexed by the search engine and resolve any indexing issues.
To disavow unwanted links, here is how to proceed:
Log in to your Google Search Console account and select the website for which you want to disavow links.
Using tools such as Semrush or Majestic, identify the potentially toxic links pointing to your site. Note down their URLs.
Create a text file containing the URLs of the links you want to disavow. Each URL must be on its own line.
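Google expects a plain UTF-8 text file; a line with a domain: prefix disavows an entire site, and lines starting with # are comments. A minimal example (the URLs are hypothetical):

# disavowed after the October 2023 backlink audit
http://spam-site.example/page-linking-to-us.html
domain:link-farm.example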
Go to the link disavowal tool in Google Search Console, select your website, and submit the disavow file you created.
Google will then review the file and take your disavowal requests into account.
You should nevertheless know that disavowal is not an absolute guarantee that the links will be removed from rankings, but it is an important step in telling Google which links you want it to ignore.
Disavowing links can be a necessary step to improve your website's SEO; however, it also carries certain risks. First of all, when you disavow links, some of them might be quality links that genuinely contribute to your traffic and your search-engine rankings. By disavowing them, you could lose that source of traffic, and your position in search results could suffer.
Moreover, the long-term effects of disavowal remain unknown. Google does not fully disclose how its ranking algorithm works, so it is difficult to predict the long-term consequences of disavowing certain links.
Once you have submitted a disavow file to Google, it can also be difficult to go back and restore the disavowed links. Additionally, if your disavow file is badly formatted or missing required information, Google could misinterpret your intentions, which can cause further problems.
Disavowing links is an essential practice for maintaining a healthy backlink profile and improving your online visibility. However, it is crucial not to rush into it and to carefully evaluate the links to disavow, so as not to disavow links that actually help your SEO.
If you have any doubt, do not hesitate to call on a professional writer via Redacteur.com's writing platform. Our writers guarantee quality content that will strengthen your SEO strategy and improve your site's visibility in search engines.
I looked at a bunch and removed some guesswork on both fronts: These resources are simply the best, better than all the rest, and maybe more importantly, they give you reliable options whether you want your journey to be 60 minutes or 6 years.
I've included some details on prerequisites at the end. Regardless of background, everyone should check out the first link. 3Blue1Brown is a legend.
Books
Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville
Deep Learning with Python, François Chollet
Neural Networks and Deep Learning, Michael Nielsen
Machine Learning with PyTorch and Scikit-Learn, Sebastian Raschka
Courses
course.fast.ai, Jeremy Howard
deeplearning.ai, Andrew Ng & others
Elements of AI, University of Helsinki
Dive Into Deep Learning, Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola
Neural Networks: Zero to Hero, Andrej Karpathy
Learn Machine Learning in 3 Months, Siraj Raval
Roadmaps
Machine Learning Roadmap, Daniel Bourke
Complete Roadmap to be a Deep Learning Engineer, Let the Data Confess
Resource lists
Awesome Deep Learning, ChristosChristofidis
Crème de la crème of AI courses, SkalskiP
Deep Learning.md, brylevkirill
Miscellaneous
Papers With Code, Meta AI Research
Two Minute Papers (Youtube Channel), Károly Zsolnai-Fehér
Sentdex (Youtube Channel), Harrison Kinsley
It’s a free country. Learn what you want along the way. All the resources above are fairly clear about what’s needed.
That said, if you want to embark on a serious course of study, here are a few central building blocks they all share that you'll need to be comfortable with.
It'll also help to know:
Happy learning!
Microplastic pollution has become a global concern due to its presence in various ecosystems. These particles (< 5 mm in size) are generated from the degradation of plastic items and can be found in oceans, rivers, soil, and even in the bodies of animals and humans.
Despite extensive research on microplastics in terrestrial and aquatic environments, their presence in high-altitude clouds and their potential influence on cloud formation and climate change remained poorly understood.
Now, researchers from Japan led by Yize Wang and Hiroshi Okochi from Waseda University have found microplastics in cloud water samples collected from high-altitude mountain regions in Japan.
Their study identified the presence of microplastics in the cloud water, confirming that microplastics are indeed present in clouds at these altitudes.
Microplastics have become a menace. Scientists recently found nine types of microplastics in the human heart. Microplastics in terrestrial and aquatic environments have been well studied, but the research on airborne microplastics is limited.
Airborne microplastics can originate from various sources, such as landfills, clothing, and the ocean (via aerosolization).
Studies have shown that airborne microplastics can travel long distances and add to global pollution in the free troposphere, the part of the atmosphere's lowest layer that lies above the surface boundary layer.
Also, airborne microplastics might play a role in cloud formation by acting as particles that attract water vapor and ice crystals, especially when transported in high-altitude air and the lower atmosphere.
Speaking of the necessity for this research, Dr. Okochi said in a press release, "Microplastics in the free troposphere are transported and contribute to global pollution."
"If the issue of plastic air pollution is not addressed proactively, climate change and ecological risks may become a reality, causing irreversible and serious environmental damage in the future."
To collect the cloud water samples for testing, the researchers focused on the high-altitude mountain summits in Japan of Mount Oyama and Mount Fuji.
The researchers employed advanced imaging techniques, including attenuated total reflection imaging and micro-Fourier transform infrared spectroscopy (µFTIR ATR imaging), to determine the types of microplastics present, their size distribution, and physical and chemical properties.
Their experiments revealed the presence of nine different kinds of microplastics in the water samples, including polyethylene, polypropylene, polyethylene terephthalate, and polyurethane, which are commonly used in everyday applications.
Interestingly, they found these microplastics to be fragmented, with their concentrations ranging from 6.7 to 13.9 pieces per liter of cloud water!
They also noticed the presence of hydrophilic microplastics with carbonyl and hydroxyl groups, suggesting that these particles could actively participate in cloud formation by serving as cloud condensation nuclei.
Explaining how their research can help with global warming efforts, Okochi said, "Airborne microplastics are degraded much faster in the upper atmosphere than on the ground due to strong ultraviolet radiation, and this degradation releases greenhouse gases and contributes to global warming."
"As a result, the findings of this study can be used to account for the effects of airborne microplastics in future global warming projections."
The presence of various microplastics in cloud water raises concerns about their potential impact on climate, ecosystems, and human health. In sensitive ecosystems such as the polar regions, the accumulation of airborne microplastics can profoundly disrupt the Earth's ecological equilibrium, leading to a significant decline in biodiversity.
The findings of their study are published in Environmental Chemistry Letters.
WaterBear is a free platform bringing together inspiration and action with award-winning high-production environmental documentaries covering various topics, from animals and climate change to people and communities. The WaterBear team produces their own original films and documentaries and hosts curated films and content from various high-profile partners, including award-winning filmmakers, large brands, and significant non-governmental organizations (NGOs), like Greenpeace, WWF, The Jane Goodall Institute, Ellen MacArthur Foundation, Nikon, and many others.
For context, I am currently working at a software development company called Q Agency based in Zagreb, Croatia. We collaborated with WaterBear and its partner companies to build a revamped and redesigned version of WaterBear’s web and mobile app from the ground up using modern front-end technologies.
In the first article, I briefly discussed the technical stack that includes a React-based front-end framework, Next.js for the web app, Sanity CMS, Firebase Auth, and Firestore database. Definitely read up on the strategy and reasoning behind this stack in the first article if you missed it.
Now, let’s dive into the technical features and best practices that my team adopted in the process of building the WaterBear web app. I plan on sharing specifically what I learned from performance and accessibility practices as a first-time lead developer of a team, as well as what I wish I had known before we started.
Image Optimization
Images are pieces of content in many contexts, and they are a very important and prominent part of the WaterBear app's experience, from video posters and category banners to partner logos and campaign image assets.
I think that if you are reading this article, you likely know the tightrope walk between striking, immersive imagery and performant user experiences we do as front-enders. Some of you may have even grimaced at the heavy use of images in that last screenshot. My team measured the impact, noting that on the first load, this video category page serves up as many as 14 images. Digging a little deeper, we saw those images account for approximately 85% of the total page size.
That’s not insignificant and demands attention. WaterBear’s product is visual in nature, so it’s understandable that images are going to play a large role in its web app experience. Even so, 85% of the experience feels heavy-handed.
So, my team knew early on that we would be leveraging as many image optimization techniques as we could that would help improve how quickly the page loads. If you want to know everything there is to optimize images, I wholeheartedly recommend Addy Osami’s Image Optimization for a treasure trove of insightful advice, tips, and best practices that helped us improve WaterBear’s performance.
Here is how we tackled the challenge.
As I mentioned a little earlier, our stack includes Sanity’s CMS. It offers a robust content delivery network (CDN) out of the box, which serves two purposes: (1) optimizing image assets and (2) caching them. Members of the WaterBear team are able to upload unoptimized high-quality image assets to Sanity, which ports them to the CDN, and from there, we instruct the CDN to run appropriate optimizations on those images — things like compressing the files to their smallest size without impacting the visual experience, then caching them so that a user doesn’t have to download the image all over again on subsequent views.
Requesting the optimized version of the images in Sanity boils down to adding query variables to image links like this:
https://cdn.sanity.io/.../image.jpg?w=1280&q=70&auto=format
Let’s break down the query variables:
– w sets the width of the image. In the example above, we have set the width to 1280px in the query.
– q sets the compression quality of the image. We landed on 70% to balance the need for visual quality with the need for optimized file sizes.
– format sets the image format, which is set to auto, allowing Sanity to determine the best type of image format to use based on the user's browser capabilities.
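To keep these query variables consistent across an app, a small helper can assemble them. Here is a hypothetical sketch (not WaterBear's actual code):

// Builds an optimized Sanity CDN URL from a raw asset URL.
function optimizedImageUrl(baseUrl, width, quality = 70) {
  const params = new URLSearchParams({
    w: String(width),
    q: String(quality),
    auto: "format",
  });
  return `${baseUrl}?${params}`;
}

// optimizedImageUrl("https://cdn.sanity.io/.../image.jpg", 1280)
// => "https://cdn.sanity.io/.../image.jpg?w=1280&q=70&auto=format"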
In many cases, the format
will be returned as a WebP file. We made sure to use WebP because it yields significant savings in terms of file size. Remember that unoptimized 1.2 MB image from earlier? It’s a mere 146 KB after the optimizations.
And all 14 image requests are smaller than that one unoptimized image!
The fact that images still account for 85% of the page weight is a testament to just how heavy of a page we are talking about.
Another thing we have to consider when talking about modern image formats is browser support. Although WebP is widely supported and has been a staple for some time now, my team decided to provide an optimized fallback JPG just in case. And again, Sanity automatically detects the user’s browser capabilities. This way, we serve the WebP version only if Sanity knows the browser supports it and only provide the optimized fallback file if WebP support isn’t there. It’s great that we don’t have to make that decision ourselves!
Have you heard of AVIF? It’s another modern image format that promises potential savings even greater than WebP. If I’m being honest, I would have preferred to use it in this project, but Sanity unfortunately does not support it, at least at the time of this article. There’s a long-running ticket to add support, and I’m holding hope we get it.
Would we have gone a different route had we known about the lack of AVIF support earlier? Cloudinary supports it, for example. I don’t think so. Sanity’s tightly coupled CDN integration is too great of a developer benefit, and as I said, I’m hopeful Sanity will give us that support in the future. But that is certainly the sort of consideration I wish I would have had early on, and now I have that in my back pocket for future projects.
LCP (Largest Contentful Paint) is the biggest element on the page that a user sees on the initial load. You want to optimize it because it's the first impression a user has with the page. It ought to load as soon as possible while everything under it can wait a moment.
For us, images are most definitely part of the LCP. By giving more consideration to the banner images we load at the top of the page, we can serve that component a little faster for a better experience. There are a couple of modern image attributes that can help here: loading and fetchpriority.
We used an eager loading strategy paired with a high fetchpriority on the images. This provides the browser with a couple of hints that this image is super important and that we want it early in the loading process.
<!-- Above-the-fold Large Contentful Paint image -->
<img
loading="eager"
fetchpriority="high"
alt="..."
src="..."
width="1280"
height="720"
class="..."
/>
We also made use of preloading in the document <head>, indicating to the browser that we want to preload images during page load, again, with high priority, using Next.js image preload options.
<head>
<link
rel="preload"
as="image"
href="..."
fetchpriority="high"
/>
</head>
Images that are "below the fold" can be de-prioritized and downloaded only when the user actually needs them. Lazy loading is a common technique that instructs the browser to load particular images once they enter the viewport. It's only fairly recently that it's become a feature baked directly into HTML with the loading attribute:
<!-- Below-the-fold, low-priority image -->
<img
decoding="async"
loading="lazy"
src="..."
alt="..."
width="250"
height="350"
/>
This cocktail of strategies made a noticeable difference in how quickly the page loads. On those image-heavy video category pages alone, it helped us reduce the image download size and number of image requests by almost 80% on the first load! Even though the page will grow in size as the user scrolls, that weight is only added if it passes through the browser viewport.
Responsive Images With srcset
My team is incredibly happy with how much performance savings we’ve made so far. But there’s no need to stop there! Every millisecond counts when it comes to page load, and we are still planning additional work to optimize images even further.
The task we're currently planning will implement the srcset attribute on images. This is not a "new" technique by any means, but it is certainly a component of modern performance practices. It's also a key component in responsive design, as it instructs browsers to use certain versions of an image at different viewport widths.
We've held off on this work only because, for us, the other strategies represented the lowest-hanging fruit with the most impact. Looking at an image element that uses srcset in the HTML shows it's not the easiest thing to read. Using it requires a certain level of art direction because the dimensions of an image at one screen size may be completely different than those at another screen size. In other words, there are additional considerations that come with this strategy.
Here's how we're planning to approach it. We want to avoid loading high-resolution images on small screens like phones and tablets. With the srcset attribute, we can specify separate image sources depending on the device's screen width. With the sizes attribute, we can instruct the browser which image to load depending on the media query.
In the end, our image markup should look something like this:
<img
width="1280"
height="720"
srcset="
https://cdn.sanity.io/.../image.jpg?w=568&... 568w,
https://cdn.sanity.io/.../image.jpg?w=768&... 768w,
https://cdn.sanity.io/.../image.jpg?w=1280&... 1280w
"
sizes="(min-width: 1024px) 1280px, 100vw"
src="https://cdn.sanity.io/.../image.jpg?w=1280&..."
/>
In this example, we specify a set of three images at widths of 568px, 768px, and 1280px.
Inside the sizes attribute, we're telling the browser to use the largest version of the image if the screen width is above 1024px wide. Otherwise, it should default to selecting an appropriate image out of the three available versions based on the full device viewport width (100vw), and will do so without downloading the other versions. Providing different image files to the right devices ought to help enhance our performance a bit more than it already is.
The majority of content on WaterBear comes from Sanity, the CMS behind the web app. This includes video categories, video archives, video pages, the partners’ page, and campaign landing pages, among others. Users will constantly navigate between these pages, frequently returning to the same category or landing page.
This provided my team with an opportunity to introduce query caching and avoid repeating the same request to the CMS and, as a result, optimize our page performance even more. We used TanStack Query (formerly known as react-query) for both fetching data and query caching.
const { isLoading, error, data } = useQuery( /* Options */ )
TanStack Query caches each request according to the query key we assign to it. The query key in TanStack Query is an array, where the first element is a query name and the second element is an object containing all values the query depends on, e.g., pagination, filters, query variables, and so on.
Let’s say we are fetching a list of videos depending on the video category page URL slug. We can filter those results by video duration. The query key might look something like this basic example:
const { isLoading, error, data } = useQuery(
  {
    queryKey: [
      'video-category-list',
      { slug: categorySlug, filterBy: activeFilter }
    ],
    queryFn: () => { /* ... */ }
  }
)
These query keys might look confusing at first, but they're similar to the dependency arrays for React's useEffect hook. Instead of running a function when something in the dependency array changes, it runs a query with new parameters and returns a new state. TanStack Query comes with its dedicated DevTools package. It displays all sorts of useful information about the query that helps debug and optimize them without hassle.
Let’s see the query caching in action. In the following video, notice how data loads instantly on repeated page views and repeated filter changes. Compare that to the first load, where there is a slight delay and a loading state before data is shown.
Accessibility
We're probably not even covering all of our bases! It's so tough to tell without ample user testing. It's a conflicting situation where you want to do everything you can while realistically completing the project with the resources you have and proceed with intention.
We made sure to include a label on interactive elements like buttons, especially ones where the icon is the only content. For that case, we added visually hidden text while allowing it to be read by assistive devices. We also made sure to hide the SVG icon from the assistive devices as SVG doesn’t add any additional context for assistive devices.
<!-- Icon button markup with descriptive text for assistive devices -->
<button type="button" class="...">
<svg aria-hidden="true" xmlns="..." width="22" height="22" fill="none">...</svg
><span class="visually-hidden">Open filters</span>
</button>
.visually-hidden {
position: absolute;
width: 1px;
height: 1px;
overflow: hidden;
white-space: nowrap;
clip: rect(0 0 0 0);
-webkit-clip-path: inset(50%);
clip-path: inset(50%);
}
Supporting keyboard navigation was one of our accessibility priorities, and we had no trouble with it. We made sure to use proper HTML markup and avoid potential pitfalls like adding a click event to meaningless div elements, which is unfortunately so easy to do in React.
We did, however, hit an obstacle with modals as users were able to move focus outside the modal component and continue interacting with the main page while the modal was in its open state, which isn’t possible with the default pointer and touch interaction. For that, we implemented focus traps using the focus-trap-react library to keep the focus on modals while they’re opened, then restore focus back to an active element once the modal is closed.
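In practice, wiring the library up looks roughly like this (a simplified sketch of a modal wrapped in focus-trap-react; the markup and handlers are illustrative, not WaterBear's actual component):

import FocusTrap from "focus-trap-react";

function Modal({ isOpen, onClose, children }) {
  if (!isOpen) return null;

  return (
    <FocusTrap focusTrapOptions={{ onDeactivate: onClose }}>
      {/* Focus cannot leave this subtree while the modal is open. */}
      <div role="dialog" aria-modal="true">
        <button type="button" onClick={onClose}>
          Close
        </button>
        {children}
      </div>
    </FocusTrap>
  );
}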
Dynamic Sitemaps
Sitemaps tell search engines which pages to crawl. This is faster than just letting the crawler discover internal links on its own while crawling the pages.
The importance of sitemaps in the case of WaterBear is that the team regularly publishes new content — content we want to be indexed for crawlers as soon as possible by adding those new links to the top of the sitemap. We don’t want to rebuild and redeploy the project every time new content has been added to Sanity, so dynamic server-side sitemaps were our logical choice.
We used the next-sitemap plugin for Next.js, which has allowed us to easily configure the sitemap generation process for both static and dynamic pages. We used the plugin alongside custom Sanity queries that fetch the latest content from the CMS and quickly generate a fresh sitemap for each request. That way, we made sure that the latest videos get indexed as soon as possible.
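The dynamic part boils down to a server-rendered route that queries the CMS and writes the XML on each request. Below is a sketch built on next-sitemap's getServerSideSitemap helper; the Sanity query function and the URLs are stand-ins, not WaterBear's actual code:

// pages/server-sitemap.xml/index.js
import { getServerSideSitemap } from "next-sitemap";

export async function getServerSideProps(ctx) {
  // fetchLatestVideos() is a placeholder for the real Sanity query.
  const videos = await fetchLatestVideos();

  return getServerSideSitemap(
    ctx,
    videos.map((video) => ({
      loc: `https://example.com/watch/${video.slug}`,
      lastmod: video.publishedAt,
    }))
  );
}

// The page component renders nothing; the XML response is
// written entirely inside getServerSideProps.
export default function Sitemap() {}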
Let's say the WaterBear team publishes a page for a video named My Name is Salt. That gets added to a freshly generated XML sitemap, and from there it's indexed for search engines to scoop up and use in search results.
Until Next Time…
In this article, I shared some insights about WaterBear's tech stack and some performance optimization techniques we applied while building it.
Images are used very prominently on many page types on WaterBear, so we used a CDN with caching, loading strategies, preloading, and the WebP format to optimize image loading performance. We relied on Sanity for the majority of content management, and we expected repeating page views and queries in a single session, prompting us to implement query caching with TanStack Query.
We made sure to improve basic accessibility on the fly by styling focus states, enabling full keyboard navigation, assigning labels to icon buttons, providing alt text for images, and using focus traps on modal elements.
Finally, we covered how my team handled dynamic server-side rendered sitemaps using the next-sitemap plugin for Next.js.
Again, this was my first big project as lead developer of a team. There’s so much that comes with the territory. Not only are there internal processes and communication hurdles to establish a collaborative team environment, but there’s the technical side of things, too, that requires balancing priorities and making tough decisions. I hope my learning journey gives you something valuable to consider in your own work. I know that my team isn’t the only one with these sorts of challenges, and sharing the lessons I learned from this particular experience probably resonates with some of you reading this.
Please be sure to check out the full work we did on WaterBear. It’s available on the web, Android, and iOS. And, if you end up watching a documentary while you’re at it, let me know if it inspired you to take action on a cause!
Many thanks to WaterBear and Q Agency for helping out with this two-part article series and making it possible. I really would not have done this without their support. I would also like to commend everyone who worked on the project for their outstanding work! You have taught me so much so far, and I am grateful for it.
To create a custom poster that will captivate your audience, do not hesitate to call on a freelance graphic designer on Graphiste.com, the go-to platform for connecting graphic designers with project owners.
Dans cet article, nous vous présentons les différents formats d’affiches publicitaires et vous donnons des conseils pour concevoir des visuels attractifs.
Afin de passer à l’impression de vos affiches publicitaires, vous devrez forcément à un moment choisir le format d’impression de vos affiches papier. Les divers formats d’impression existants possèdent tous une mise en application différente, rendant certains formats plus propices à certaines situations.
Le format 4 par 3 est un incontournable dans le domaine des affiches publicitaires. Ce format d’impression dispose de dimensions généreuses, de 4 mètres de largeur sur 3 mètres de hauteur.
Le format 4 par 3 offre une visibilité hors pair aux passants et automobilistes le long des routes et autoroutes. Sa taille imposante permet de communiquer un message clair et impactant, attirant instantanément l’attention du public cible.
Idéal pour les campagnes extérieures à grande échelle, ce format d’impression offre un espace créatif généreux pour exprimer votre identité de marque et mettre en avant vos produits ou services de manière percutante. Il est également parfait pour annoncer des événements, des promotions spéciales ou des lancements de produits.
Le format A0 est un choix stratégique pour les campagnes publicitaires demandant une présence remarquable. Avec ses dimensions de 84,1 cm de largeur sur 118,9 cm de hauteur, cette affiche imposante attire les regards sur les lieux fréquentés.
Le format A0 est souvent utilisé dans les halls d’exposition, les grandes salles ou les événements du fait de sa surface généreuse permettant de présenter votre message de manière percutante.
L’avantage majeur de ce format d’impression réside dans sa capacité à captiver un public nombreux. Sa visibilité est amplifiée lorsqu’il est placé dans des zones à forte affluence, permettant ainsi d’atteindre un large éventail de prospects. Le format A0 peut contenir des visuels accrocheurs, des informations détaillées et des éléments graphiques saisissants grâce à sa taille imposante.
Plus petite que l’affiche au format A0, l’affiche A1 reste une affiche de grand format. Avec ses dimensions de 59,4 cm de largeur sur 84,1 cm de hauteur, cette affiche offre un équilibre parfait entre taille et visibilité.
Le format A1 est un choix judicieux pour les campagnes publicitaires souhaitant concilier impact visuel et praticité. Polyvalent, il est idéal pour les campagnes en extérieur comme en intérieur.
En extérieur, le format A1 capte l’attention des passants dans des zones à forte fréquentation. Sa taille suffisamment grande permet de diffuser un message clair et percutant. Dans les espaces intérieurs, ce format d’impression s’intègre harmonieusement dans les lieux de passage, les commerces ou les halls d’accueil.
Son côté pratique facilite aussi la gestion des campagnes. L’affiche A1 est facile à installer et peut être déplacée aisément selon vos besoins. Pour des messages ciblés ou des annonces plus ponctuelles, le format A1 convient aussi parfaitement.
Le format A2, grâce à ses dimensions de 42 cm de largeur sur 59,4 cm de hauteur, offre une visibilité optimale tout en bénéficiant de dimensions réduites.
Le format A2 est aussi un choix astucieux pour les campagnes publicitaires qui dont l’objectif est d’allier praticité et impact visuel. Les affiches au format A2 sont principalement utilisées en entreprise, que ce soit pour un affichage extérieur comme intérieur. Les affiches papier au format A2 sont aussi adaptées aux particuliers puisqu’elles sont idéales pour un affichage intérieur.
Avec ses dimensions de 29,7 cm de largeur sur 42 cm de hauteur, le format A3 offre une surface idéale pour transmettre un message concis et percutant.
Choisir le format A3 pour vos affiches publicitaires est idéal pour les campagnes visant une communication ciblée et efficace. Grâce à sa taille compacte, ce format d’impression est particulièrement adapté pour les campagnes à petite échelle et en intérieur. Vous pouvez notamment retrouver des affiches adoptant ce format d’impression dans les commerces, les salles d’attente ou les vitrines.
Le format A3 est très pratique car il est facile à manipuler, à afficher et à distribuer. Il vous permet de réaliser des campagnes ciblées à moindre coût, tout en touchant efficacement votre public. Avec le format A3, vous privilégiez donc une communication plus personnalisée tout en gérant votre message de manière plus flexible.
Comme son nom l’indique, le format 50 x 200 cm dispose d’une taille de 50 cm de largeur sur 200 cm de hauteur et est destinée aux campagnes publicitaires nécessitant une communication verticale percutante.
Les affiches au format 50 x 200 cm offrent une parfaite visibilité dans les espaces en hauteur, tels que les couloirs du métro ou les façades de bâtiments.
Grâce à sa taille allongée, vos campagnes publicitaires captiveront l’attention des passants lors de leurs déplacements, ce qui offre une communication efficace dans des zones à forte fréquentation. Il est donc particulièrement adapté aux campagnes urbaines, touchant un public varié et mobile.
Le format Abribus est le seul format d’impression sur cette liste aux dimensions variables. Bien que sa taille soit généralement de 120 cm de largeur sur 175 cm de hauteur, ce format d’impression doit être spécifiquement adapté afin de s’intégrer à l’affichage urbain des arrêts de bus, tram et métro.
Utilisé au sein des abris pour bus, trams et métros, le format Abribus se distingue par son emplacement privilégié, attirant l’attention des usagers lors de leur attente. C’est un moyen efficace de toucher un large éventail de personnes d’horizons différents.
Les voyageurs utilisant régulièrement les transports en commun seront aussi exposés plusieurs fois à votre annonce, renforçant ainsi la mémorisation de votre marque ou de votre produit.
Le choix du format d’affiche publicitaire est une des décisions les plus importantes pour votre campagne de communication. Pour garantir une communication efficace, il est essentiel de prendre en compte certains facteurs clés pour choisir le format d’impression de vos affiches.
Avant de choisir un format d’impression pour vos affiches, clarifiez le message que vous souhaitez transmettre. Si votre message est simple et percutant, optez pour un format d’impression plus grand, comme le 4 par 3 offrant une visibilité maximale. En revanche, pour une communication plus détaillée, préférez des plus petits formats d’impression.
Le lieu où sera exposée votre affiche influence aussi le choix du format d’impression de vos affiches. Pour une campagne en extérieur le long des routes ou autoroutes, préférez alors de grands formats d’impression, comme le 4 par 3 et le A0. En intérieur, les formats A1, A2 ou A3 conviennent mieux pour s’adapter à l’espace qui est disponible.
Comprendre son audience est essentiel pour choisir un format d’affichage adéquat. Si vous visez les usagers des transports en commun, favorisez alors le format abribus. Pour une communication plus ciblée, les formats A3 et A2 seront optimaux.
La durée pendant laquelle votre affiche est exposée influence aussi le choix de son format d’impression. Pour des campagnes éphémères, privilégiez de grands formats, qui attirent rapidement l’attention. Pour des expositions prolongées, vous pouvez favoriser les formats abribus.
Concevoir le visuel d’une affiche publicitaire est une étape cruciale pour assurer l’efficacité de votre campagne de communication. Pour capter l’attention de votre audience et transmettre votre message de manière percutante, voici quelques astuces essentielles à suivre :
Pour des affiches percutantes et professionnelles, vous pouvez toujours faire appel à un graphiste freelance sur Graphiste.com. Les graphistes de notre plateforme sont des experts en communication visuelle et sauront donner vie à vos idées et créer un visuel personnalisé qui répondra à vos objectifs marketing. N’hésitez donc pas à confier votre projet à un graphiste talentueux sur Graphiste.com pour une communication visuelle percutante et mémorable.
Le choix du format d’affiche publicitaire est un élément déterminant pour le succès de votre campagne de communication. Chaque format d’impression offre des avantages spécifiques en fonction de l’environnement et des objectifs de votre campagne. Chaque choix est donc stratégique et impactant à sa manière.
Pour créer des affiches publicitaires percutantes et professionnelles, n’hésitez pas à faire appel à un graphiste sur Graphiste.com. Nos graphistes sont spécialisés dans la communication visuelle et sauront mettre en valeur vos projets, vos produits ou vos services avec créativité et originalité. Faites le choix de l’excellence pour vos affiches publicitaires et marquez les esprits de votre audience dès aujourd’hui.
L’article Quel format d’affiche publicitaire choisir pour vos campagnes ? est apparu en premier sur Graphiste.com.
Tracked as CVE-2023-5217, this high-severity security flaw carries a CVSS v3.1 score of 8.8 out of 10. Reported by Clément Lecigne of the Google Threat Analysis Group, it is a heap buffer overflow located in an encoding function of the libvpx video codec library.
According to Google, this vulnerability is already being used in cyberattacks and an exploit is available: "Google is aware that an exploit for CVE-2023-5217 exists in the wild," the official security bulletin states. Maddie Stone of the Google Threat Analysis Group also says the vulnerability has been used to install malware on victims' machines. As usual, Google is withholding technical details to give its users time to install the fix.
In the past few hours, the US agency CISA has added CVE-2023-5217 to its catalog of known vulnerabilities exploited in attacks.
Beyond fixing this zero-day, the update includes "10 security fixes" according to Google, notably patches for two other vulnerabilities: CVE-2023-5186 and CVE-2023-5187, both use-after-free bugs located in Chrome's Password Manager and Extensions Manager, respectively.
The developers recently fixed another zero-day in Google Chrome, but that one was tied to the libwebp library, which is also used by other projects, including the Mozilla Firefox and Microsoft Edge browsers, as well as Signal and 1Password.
If you use Google Chrome, it is strongly recommended to install the latest version: 117.0.5938.132. It is available for Windows, macOS, and Linux.
The post Google Chrome : encore une faille zero-day exploitée dans des attaques (CVE-2023-5217) first appeared on IT-Connect.
Google has added a 'Plus' designation to its Chromebook spec that requires machines to offer at least an Intel Core i3 12th Gen or above, or AMD Ryzen 3 7000, plus 8GB of memory and 128GB of onboard storage.…
On Thursday evening, the traditional community apéro will now give way to a full community evening. Attendees are invited to enjoy the bars and restaurants of Disney Village; for the record, the AFUP team will most likely set up camp at the Billy Bob Saloon! Enjoy the festive atmosphere of Disneyland Paris with the PHP community.
The game launched during the community apéro at Forum PHP 2022 encouraged us to push the idea further. To preserve the community spirit of the evening over a much larger area than in previous years, a game will keep alive the unifying soul of our traditional Thursday night.
The game will kick off at the end of the day on Thursday and run through the evening. Without revealing the concept just yet, know that you will have to find your teammates without knowing who they are! A great way to encourage encounters and spark conversations. Remember to keep your badge with you and wear it visibly on Thursday evening.
If you enjoy the game, note that the team is working on making it open source so that anyone can reproduce it for their own teams or events.
Of course, if there is one tradition we cannot break, it is the drink ticket: an equivalent system will be put in place so you can enjoy the bars and restaurants in the area, with AFUP chipping in to treat you.
The article "Accès des mineurs aux sites pornos: les enjeux du projet de loi" appeared first on FRENCHWEB.FR.
The post Chalk: Open-source software security and infrastructure visibility tool appeared first on Help Net Security.
For 15 years, DuckDuckGo has been challenging Google on the online search front, with one weighty argument: it refuses to exploit your personal data. Here's the story. Some have adopted it, others have merely tried it, and the name may mean nothing to you at all. DuckDuckGo is a search engine (like Google and Bing) that has now existed for 15 years. Founded by Gabriel Weinberg in 2008, it saw its first real acceleration in 2011. Its headquarters are […]
The article "15 ans de DuckDuckGo : les choses à savoir" appeared first on Goodtech Info.
Combodo is evolving its open source ITSM platform and has announced the immediate availability of version 3.1, the culmination of a transformation project launched in 2020. Combodo iTop is an application (SaaS or on-premises) designed to manage the complexity of shared infrastructures. The solution covers all client environments while protecting the confidentiality each organization requires. The new iTop 3.1, announced this week, marks the end of a major modernization effort begun in April 2020 after […]
The article "Ce qui change avec iTop 3.1, l'ITMS open source" appeared first on Goodtech Info.
On August 29th of this year, Xiaoyuan Gao from Not Your Type Foundry posted a statement on her Instagram addressing the unauthorized use of her font files by a font distribution website. The website in question is Fontesk.com, a platform where users can access and download numerous “high-quality” “free fonts for commercial use” and “open-source fonts.”
In what ways did Fontesk overstep their boundaries in utilising the fonts that Gao had generously uploaded online for free?
To understand this, we first need to grasp how “free fonts” work.
The fonts obtained from Fontesk differ from those you can find on dafont.com. Why is that? For individuals seeking fonts for business and advertising purposes, there are complications with many of the free fonts available for download on Dafont. Sometimes, these so-called “free” fonts are merely samples, offering limited characters, and excluding punctuations and other symbols.
Font providers typically require payment to access the full character set, and additional charges may apply if you intend to use them in a commercial context, necessitating a license for such usage.
This is precisely why Fontesk proudly promotes its free fonts as both commercial-free and open-source. Users are not required to purchase costly licenses to employ them in their business endeavours. In the case of open-source fonts, designers can modify these fonts and share them freely with others, allowing for versatile use. This is a level of freedom that many fonts on Dafont and other paid font sites do not allow. It is this significant flexibility in font usage that makes downloading fonts from Fontesk an appealing and cost-effective choice.
The majority of these free fonts fall under the Open Font License* (OFL), which was created by SIL International. This license allows fonts to be used, modified, and distributed freely, as long as the resulting fonts remain under the Open Font License. The only restriction under this license is that users cannot use the same font name if they wish to share their edited version of the font online.
*It’s important to note that type designers also use other font licenses, such as the Apache and Creative Commons licenses. Users who download fonts should carefully read and identify the specific licenses attached to them to avoid any violations of rights when using these fonts.
The creation of the Open Font License had a significant purpose from the outset. SIL International, a non-profit organization, had several objectives*, one of which was to help document and preserve languages that might be in danger of becoming obsolete while promoting literacy.
*It’s worth mentioning that SIL International is affiliated with evangelical Christians and has a mission to increase Bible literacy in support of their missionary activities. Consequently, certain countries, especially those with indigenous communities, have banned SIL International from their territories (p. 182).
Whether the aim is to enhance accessibility to minority languages or simply an act of generosity, many typographers have chosen to use the OFL for their fonts, making them available online for others to freely use in their projects. Given this context, one might wonder why Xiaoyuan Gao is upset with the way Fontesk is handling her font files.
Gao intends to release her fonts through the Velvetyne type foundry in the near future. Therefore, when Fontesk published her fonts on their site before the official announcement, they were still a work in progress and were not yet meant for public release.
These files, which are still under development, are stored on a cloud-based service called GitHub. Many open-source software developers, including typographers, use GitHub or GitLab to store, track, and collaborate on various software projects, such as fonts. This sort of collaboration was true of Velvetyne and Gao’s font project.
During our email interview, she further explained, “Putting OFL projects on GitHub doesn’t mean anyone can just take it for granted. Fontesk never contacted me about what they are planning to do with my files.
There is a “READ.ME” file in my Github repository — which clearly mentions my font will be published at Velvetyne Type Foundry, and normal people will at least check if the font is released or not before doing anything with it.”
While the fonts themselves are open-source and can be redistributed, Fontesk ignored Gao’s specified publishing conditions and proceeded to release her work prematurely on their platform without her knowledge.
While Fontesk has made efforts to provide attribution to the type designers on their website, three other aspects of their font distribution practices raise questions:
The in-progress font files that Gao and other users have on GitHub/GitLab are stored publicly. Fontesk takes advantage of this by searching for fonts in these public repositories, extracting them, and reuploading them to their website.
With this approach, can Fontesk still claim that their curated fonts are of “high quality” when they are essentially using people’s unfinished work for publication?
By offering these fonts for download and regular use online, Fontesk not only puts their reputation at risk but also jeopardises the reputation of the type designers. Users may encounter certain bugs or unfinished character sets and express their dissatisfaction. They may attribute the subpar font quality to poor production, when, in reality, the typographer has not yet completed the font project.
The OFL prohibits users from obtaining fonts and selling them for profit. However, Fontesk incorporates ads on their web pages. Consequently, whenever individuals visit their website to download fonts and encounter these ads, Fontesk receives compensation from the advertising agency based on web page viewership.
Needless to say, Gao wasn’t impressed. She stated, “Fontesk just steals people’s work because they can… I think they are very aware of their dirty business; there are tons [of ads on their website] so that they can make money out of people’s work.”
By earning from people’s work (including mine!) through ads, Fontesk’s website is built on the backs of countless hardworking typographers who were exploited without their knowledge. Gao adds, “As a type designer and owner of a type foundry, taking and publishing people’s open-source type design projects without asking permission from the creators is so messed up and way too disrespectful. I would never do that because respecting people’s hard work is the bare minimum.”
Upon discovering Fontesk’s actions and requesting the removal of her font files, Gao was met with a resolute refusal, and they even went so far as to block her IP address from accessing their website.
Although they eventually removed her fonts (the specific font download page is now inaccessible), their handling of the situation has been nothing short of discourteous. Not only did they release unfinished fonts without the designer’s consent, but they also chose to be less than transparent with Gao and displayed disrespect by attempting to keep her fonts available on their site.
This further tarnishes their reputation as a font distributor. One can only wonder how many fonts they may have clandestinely acquired without the knowledge or permission of the typographers, all to discreetly profit from them.
In the broader context, Fontesk is just one among several questionable font websites engaging in more serious infractions. For instance, there’s FontKe, a font distribution site based in China, which permits users to download only one free font from their server before requiring them to pay for memberships and virtual currency to access additional fonts, including those intended to be open-source or non-redistributable from their original source.
To compound the issue, FontKe does not take the initiative to provide proper attribution and licenses for each of these fonts, something that any reputable font retailer or distributor would have done and displayed upfront. Instead, they expect users to independently seek out individual font licenses, and they claim innocence if users inadvertently violate font usage rights.
Furthermore, they have a quotation form for users to inquire about the pricing of font licenses, even for fonts that should already be attributed to the OFL. I attempted to request a quote for one of their free, open-source fonts, but I received no response from them, casting doubt even on this aspect of their service.
Recognizing that the OFL is written in a way that can be easily exploited, type designers can first opt to create their own written agreements, often referred to as End User License Agreements (EULAs) that users must adhere to.* This can help prevent the misuse of their fonts. The agreement may include restrictions on redistributing the fonts on other websites, even if modifications have been made. Designers can also use this opportunity to incorporate clauses in their agreements to provide additional safeguards against unethical practices, such as hate speech.
*To learn more about what designers can and should do regarding EULAs, you can find additional information in my other essay here.
Of course, no amount of agreements can completely prevent thieves from attempting to illegally profit from the fonts of hardworking typographers. Fortunately, with a strong online presence, Gao was able to utilise her network and social media to expose Fontesk’s unethical practices and safeguard her work.
Many artists have also turned to social media to shed light on companies misusing their artwork for profit. Artists on Twitter discovered that bots were taking people’s artwork from the platform and selling it on T-shirts. To combat this, users created copyright-infringing art to lure these bots into extracting the images, intending to sell them as printed T-shirts on their websites. This tactic worked, prompting distributors to promptly remove these listings to avoid legal repercussions.
Unfortunately, the battle against such thefts remains an ongoing challenge. In cases like these, it’s crucial for artists of all backgrounds, whether established or emerging, to continue supporting one another by exposing and taking action against unscrupulous websites that seek to profit from illicit means.
We can also take steps to ensure the fonts we use in our projects are sourced responsibly. It’s a good practice to verify whether the fonts we intend to use are obtained from reputable websites and if their licenses align with our specific use cases before incorporating them.
Whether you’re in search of free/open-source fonts or even paid ones, it’s always advisable to seek out the original websites that host these fonts. This is crucial because legitimate font distributors provide the proper licenses and usage conditions, which illegal distributors may attempt to omit or modify in their files.
The founders of an open-source type foundry, Death of Typography, have thoughtfully compiled a list of reputable font distribution websites offering high-quality fonts, including some that are open-source. You can access the list here. Many thanks to Yen for this valuable curation!
If you have any other reputable free font websites to recommend, please do share them in the comments. Additionally, consider showing your support by expressing gratitude to the type foundries and designers through their social media channels. After all, they are creating and sharing these fonts generously from the heart!
As I mentioned in my review of the Free Font Index, open-source fonts play a crucial role in preserving the written works of linguistic communities that are often overlooked. They also encourage aspiring typographers to learn by reverse-engineering these fonts, making high-quality typemaking accessible to everyone.
However, these positive contributions can only continue if there is sufficient funding to archive and maintain these fonts online, as the people behind them also need to earn a living. Whenever possible, consider paying for fonts to support the field of typography. Paid font licenses are priced the way they are because it can take days, or even years, for type foundries to create complete character sets and font families, making them ideal for use in your documents and design projects.
This commitment to quality can also be observed in how professional type foundries* present font specimens and allow users to test their fonts directly on their websites. To enhance the user experience during font testing, professional websites often choose not to include ads to avoid distractions or quietly earn from your visit before you decide to make a purchase.
*There is a difference between certain type foundries and online font retailers in how they operate and pay type designers. As a rule of thumb, it is always better to support independent type foundries than commercial moguls (like Monotype, who owns MyFonts).
If you would like more information regarding this, please feel free to let me know in the comments and I may write an article about it in the future.
We’re grateful for the increasing accessibility of high-quality, free fonts online. It’s thanks to the efforts of type designers that platforms like Canva and Google Docs can offer a wider selection of fonts.
However, this accessibility comes with the responsibility for all of us to use these fonts ethically and not take advantage of the kindness of others, as seen in Fontesk’s treatment of Gao’s work. Let’s support the proper use of free and open-source fonts by downloading them from official sources and giving professional type design the respect it deserves.
How a font website dishonestly earns money was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
That’s a reason I love Dan’s recent answers.
What colors should go in my color palette?
Black, white, and one strong accent color.
Zero nuance. Just answers.
It aims to solve the age-old frustration of running out of room when typing in a <textarea> element. We can already set the number of rows in a <textarea> directly in HTML:
<textarea rows="10">
Setting the initial size isn’t the issue; the issue is what happens after the user reaches that threshold. At that point, any text entered beyond those 10 rows starts getting cut off at the top.
We might go so far as to consider this to be a form of CSS data loss.
That’s what the proposal for a new form-sizing property is all about, which the CSSWG approved back on May 10, 2023. The idea is that we can opt into textareas that are automatically sized by the content they contain:
/* Adjust sizing to content */
textarea { form-sizing: auto; }
/* Normal behavior */
textarea { form-sizing: normal; }
Chrome appears to be the only browser currently working on the new property, at least in Chrome Canary. There’s a quiet ticket in Firefox, and nada that I could find in WebKit. Does that mean I’m obligated to file it?
There are over 17 million developers worldwide who use NPM packages, making it a lucrative target for cybercriminals.
This is a post from HackRead.com. Read the original post: FortiGuard Labs Uncovers Series of Malicious NPM Packages Stealing Data
Updated A dental healthcare advert featuring what looks like a younger Tom Hanks dressed in a black suit is fake and AI-generated, the Forrest Gump actor has warned.…
In the following video, Andrej Karpathy, one of the most prominent figures in AI, gives a golden lesson on prompt engineering:
https://youtu.be/bZQun8Y4L2A?si=jg6Da4jT05Nn2VbE&embedable=true
We all use LLMs like ChatGPT, Claude, or Llama to generate human-like text and assist in a wide range of tasks, from answering questions to generating creative content.
However, to effectively use these models, it is crucial to understand how they are trained and how to prompt them to achieve the desired results.
In this post, I will share various techniques, learned from Andrej's talk, for harnessing the full potential of large language models.
The training process of large language models like GPT involves several stages: (1) pre-training, (2) supervised fine-tuning, (3) reward modeling, and (4) reinforcement learning.
Pre-training is the initial stage, where the model is trained on a vast amount of data, including web scrapes and high-quality datasets such as HuggingFace, GitHub, Wikipedia, books, and more. The data is preprocessed to convert it into a suitable format for training the neural network.
During pre-training, the model predicts the next token in a sequence. This process is repeated over an enormous number of tokens, enabling the model to learn the underlying patterns and structures of the language. The resulting model has billions of parameters (reportedly around 1T for GPT-4, 175B for GPT-3, 130B for Claude 2, and 7B, 13B, and 70B for Llama 2), making it a powerful tool for various tasks.
Supervised fine-tuning is the next stage, where the model is trained on specific datasets with labeled examples. Human annotators gather data in the form of prompts and ideal responses, creating a training set for the model. The model is trained to generate appropriate responses based on the given prompts. This fine-tuning process helps the model specialize in specific tasks.
Reward modeling and reinforcement learning are additional stages that can further improve the model's performance. In reward modeling, the model is trained to predict the quality of different completions for a given prompt. This allows the model to learn which completions are more desirable and helps in generating high-quality responses. Reinforcement learning then trains the model against that reward model, refining its language generation capabilities.
Prompt engineering plays a crucial role in effectively utilizing large language models. Here are some techniques that can enhance the performance and control the output of these models:
Task-Relevant Prompts: When prompting the model, ensure that the prompts are task-relevant and include clear instructions. Think about how the human contractors behind the labeling process would approach the task, and provide prompts accordingly. Including relevant instructions helps guide the model's response.
Retrieval-Augmented Generation: Incorporate relevant context and information into the prompts. By retrieving and adding context from external sources, such as documents or databases, you can enhance the model's understanding and generate more accurate responses. This technique allows the model to leverage external knowledge effectively.
Few-Shot Learning: Provide a few examples of the desired output to guide the model's response. By showing the model a few examples of the expected output, you can help it understand the desired format and generate more accurate responses. This technique is particularly useful when dealing with specific formats or templates (a sketch follows this list).
System 2 Thinking: System 2 thinking involves deliberate planning and reasoning. Break down complex tasks into smaller steps and prompt the model accordingly. This approach helps the model reason step-by-step and generate more accurate and coherent responses.
Constraint Prompting: Use constraint prompting to enforce specific templates or formats in the model's output. By reducing the probabilities of specific tokens, you can guide the model to fill in the blanks according to the desired format. This technique ensures that the model adheres to specific constraints while generating responses.
Fine-Tuning: Fine-tuning the model can further enhance its performance for specific tasks. By training the model on task-specific datasets, you can specialize its language generation capabilities. However, fine-tuning requires careful consideration and expertise, as it involves complex data pipelines and may slow down the training process.
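As promised above, here is what a few-shot prompt typically looks like. This is a minimal, purely illustrative sketch; the task and the examples are invented. The pattern is simply worked examples first, then the real query left open for the model to complete:

Convert each city to its country.

City: Paris
Country: France

City: Tokyo
Country: Japan

City: Toronto
Country:

Because the two examples establish the "City/Country" template, the model is far more likely to answer with just "Canada" instead of a free-form sentence.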
Finally, it is important to consider the limitations and potential biases of these models. They may generate false information, make reasoning errors, or be susceptible to various attacks. Therefore, it is advisable to use them with human oversight and treat them as sources of suggestions rather than completely autonomous systems.
Prompt engineering is a crucial aspect of effectively utilizing large language models. Andrej's Microsoft Developer Conference video is a great source for understanding the big picture of LLMs.
Also published here.
It’s like the StumbleUpon of yore (a “bar” across the top and randomized <iframe>’d websites below) except all the websites it brings you to are people’s personal blogs.
It’s just a charming experience because you land on websites that you’d very likely never land on, but can be entirely interesting. It’s true.
Collection of thoughts!
Some people’s sites don’t take kindly to being iframed. Some will try to break out of it. Some will just refuse to load.
I suspect a lot of people don’t even know that about their own site, it’s something imposed by the host. If I had to guess why, it’s because there is this security concern called “clickjacking”. If a site is allowed to be iframed, technically, someone could position like hidden inputs directly over inputs on the site and it could look like you are entering information on the real website but are really giving data to a nefarious website. The only way around it is to prevent iframing at all.
The fact that the site you’re looking at is iframed means you can’t just copy the URL quick, in case you’re trying to share or bookmark it or whatever. It’s not impossible, you just click the URL and it pops you out, but then you’ve kinda left the flow.
It’s not just the sharing that has a “breaks the web” feeling, it’s the back and forward buttons too. If you leave a site without meaning to, you ain’t finding it again. It’s not in your browsing history, you can’t press “back” to get back there, and there is nothing saved to your Kagi account or anything, which I suspect is partially because this is an MVP thing and partially because I think Kagi is privacy focused and saving user data opens up a can of worms there.
All the awkwardness with iframes makes me think that this thing shouldn’t be iframing at all. My opinion is that it should be a browser extension, which takes you right to the sites directly. The browser extension could offer the same options/controls. I think anyway! Have you seen the sidePanel API? (sidebarAction on Firefox, unclear what Safari offers) I think that might be the ticket to get persistent UI while using it.
I know Tango was eyeing up side panels because their extension would benefit greatly from them, but ultimately went with a popup window (probably for browser consistency?).
There is an “Appreciate” button and a “Leave a Note” button. I like the idea! I don’t totally understand how they work, though, even having used both.
I suspect when other people come across that same site, it’ll have a number listed if other people appreciated it, but again I’ve never seen that, nor a note left. If the notes are just sitting there, that also feels weird, because allowing people to leave anonymous messages on the internet is just something that never ends well.
Poking around a little, I see they did mention it in the intro blog post:
Here, you can “appreciate” a post or jot down a temporary public note about it. These notes will vanish in about a week as we cycle in new content – emphasizing the fleeting, imperfect nature of the small web.
Wow, a lot of personal sites are ugly. 😬😬😬
Not trying to be a jerk, it’s just surprising (and a little comforting in a weird way, like how you don’t want a bowling alley to be too nice). I also find it surprising that if a site looks decent (to me), there is a good chance it’s a pretty off-the-shelf decent WordPress theme.
Large and small corporations share a common fundamental interest in delivering secure software. Neglecting security exposes them to data breaches, financial losses, legal liability, and reputation damage.
Most IT departments have dedicated security teams to mitigate these risks. However, the approach to delivering secure software has evolved over the last decades through the DevOps movement.
DevOps emphasizes proactive security, collaboration, and automation. Instead of treating security as a separate phase, it integrates security from the beginning, catching vulnerabilities early and reducing the chance of issues reaching production. Moving security concerns from the end of the development lifecycle to its source defines a transformative process named “Shift left.”
Traditional IT departments rely on reactive security measures, addressing security issues primarily after they occur. They have developed skills and tools to identify production issues and perform emergency operations in production environments.
Security is seen as a separate phase, often applied as a “band-aid” solution rather than an integrated part of the development and operations processes. Security operations are treated as firefighters rather than engineers consolidating the system. This approach also leaves developers out of securing their codebase, supply chain, and infrastructure.
This structure strongly emphasizes protecting the public interfaces, while internal systems and applications may receive less attention. It results in uneven security guarantees in the components of information systems. Operations heavily monitor firewalls and gateways, but the code running on these platforms is audited only as part of larger-scale, end-to-end security audits.
These security assessments and audits are typically manual, periodic processes after the development and deployment phases. Vulnerability scans, penetration testing, and code reviews may be infrequent, leaving systems vulnerable to emerging threats.
DevOps is a cultural and organizational movement that promotes collaboration and communication between software development (Dev) and IT operations (Ops) teams. It breaks down traditional silos, fostering a culture of shared responsibility and mutual understanding between these traditionally separate groups.
The main objective of DevOps practices is to streamline the process of delivering high-quality software. Automation plays a significant role in that context, and CI/CD practices are at the heart of it. It involves automating tasks like code builds, testing, deployment, and infrastructure provisioning to reduce manual errors and accelerate delivery cycles.
DevOps promotes a feedback-driven approach to development and operations, emphasizing continuous communication and learning. It fosters early feedback by automating testing, integrating feedback loops throughout the software development process, and deploying code changes frequently.
Shift left for security practices in DevOps means that security testing occurs earlier in the development process, ideally during the coding and build stages. DevOps emphasizes collaboration and integration between development and operations teams, which also extends to security teams.
Originally, DevOps was designed to limit operational concerns such as production bugs or downtime. Later, security was explicitly incorporated into the development and deployment pipeline with the rise of the DevSecOps movement.
In left-shifted security management, the DevOps process includes security teams. Thus, it ensures that security is considered from the beginning (the left) of the development lifecycle rather than being a separate process.
DevOps encourages using infrastructure as code (IaC) for infrastructure provisioning and management. It has the first significant benefit of avoiding manual operations in production environments. When using IaC, infrastructure components are provisioned and deployed automatically through a continuous deployment (CD) pipeline, bringing consistency.
Defining infrastructure and access controls as code allows for enforcing security standards on these configuration files. It is possible to assess compliance with these standards within a continuous integration (CI) pipeline before deployment.
The above applies to configuration as code in general, whether it be a Dockerfile or a CI configuration file. Most supply chain security tools work by assessing the content of a project configuration file and defining the various dependencies of the said project.
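As a toy illustration of that idea (in Go, and purely a sketch; real pipelines usually rely on off-the-shelf scanners and policy engines), a CI step could fail the build whenever a Dockerfile pulls a base image that is not pinned to an immutable version:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// A toy policy check for a CI pipeline: scan the Dockerfile and fail
// the job when a base image is untagged or uses the mutable "latest" tag.
func main() {
	f, err := os.Open("Dockerfile")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	ok := true
	lineNo := 0
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		lineNo++
		line := strings.TrimSpace(scanner.Text())
		if !strings.HasPrefix(strings.ToUpper(line), "FROM ") {
			continue
		}
		image := strings.Fields(line)[1]
		// Flag images with no tag or digest, and the mutable "latest" tag.
		if !strings.ContainsAny(image, ":@") || strings.HasSuffix(image, ":latest") {
			fmt.Printf("Dockerfile:%d: image %q is not pinned to an immutable version\n", lineNo, image)
			ok = false
		}
	}
	if !ok {
		os.Exit(1) // a non-zero exit fails the CI job before anything deploys
	}
}

A non-zero exit code is all it takes for the pipeline to stop the change before it reaches a deployed environment, which is exactly the shift-left effect described above.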
There is no such thing as zero risk. While limiting risks with continuous assessment, DevOps also promotes monitoring of applications and infrastructure in production. Developers remain involved as the DevOps approach for monitoring focuses on giving developers the best tooling to assess and answer production issues quickly. Time to recovery is a standard metric for determining a team’s performance in DevOps practices.
DevOps fosters small and frequent deployments. It also involves investing to be able to roll back changes easily. Thus, it becomes possible for developers to provide swift responses to security incidents, dramatically reducing the time to recovery of IT teams.
DevOps practices can streamline compliance and auditing processes. Automation ensures that security controls and compliance requirements are consistently applied, reducing manual efforts.
This automation takes place through security assessment tools in CI pipelines. Thanks to this, security teams can continuously monitor for security threats. Since this evaluation happens before the code reaches production environments, ensuring ongoing compliance with security policies and standards is much more convenient.
CI pipelines can enforce the most common security standards. The supply chain (dependencies), code, infrastructure, or even a live application in a test environment can all be assessed against these standards directly from a CI pipeline before anything reaches a deployed environment.
The first step for shifting left regarding security management is to act to foster a DevOps culture in how your organization delivers software. Conway’s law is an empirical law that we often refer to in IT management circles.
Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.
In the context of DevOps, this law has a single, fundamental consequence: We have to structure the IT department to foster communication between developers and operations.
The above can mean embedding an operations engineer in each development team and asking development teams to own their entire deployment lifecycle. This is the most efficient way to spread security and operational knowledge across developers. However, it risks cutting off communication between the DevOps specialists of different teams; per the law above, that can create discrepancies in how software is operated across the organization.
You can also have developer teams manage their operational concerns through security and monitoring tools that DevOps teams provide as a service. For instance, a use case we often see at Escape is an operations team granting developer teams access to our SaaS platform while setting up Escape security scans within their CI/CD pipeline. Pre-deployment enforcement of security best practices remains controlled by the operations team, but developers are notified much earlier in the development lifecycle if their code fails to comply.
A DevOps-compliant IT department features a lot of organizational as well as technological characteristics. These characteristics range from adding a large set of testing tools within a CI pipeline to purely managerial features such as security considerations in the product design phase and infrastructure management by deploying infrastructure as code.
Adopting every practice defined by DevOps guidelines at once will be, at best, challenging for managers and the teams involved. However, there is no need to shift everything at once. Becoming a DevOps organization one step at a time is possible and recommended.
The starting point will depend on your organization’s operational constraints. For instance, if your deployment process involves a lot of manual operations, and is known for being flaky, you might want to focus on implementing an Infrastructure as Code strategy. On the other hand, if your product features access control caveats, you will want to focus on security guidelines and checklists for your product teams.
Bringing developers closer to operational and security concerns is the key to a successful DevOps transition. Organizations must provide security training and awareness programs for development and operations teams to ensure they understand and can address security issues.
No limit exists to how far left security can be shifted in the development process. Product teams can be involved in security concerns by formalizing security behaviors in their user stories. For instance, at Escape, we must consider how each feature we develop interacts with our role-based access control (RBAC) model. Should we create a new role for accessing this feature? Does it fall within the scope of the RBAC? We systematically address these concerns when designing our product.
Peer code reviews are also an excellent tool for enforcing security concerns to be addressed and sharing knowledge with developers. More than that, security engineers can gain a deeper understanding of the constraints developers meet regarding security. Fostering this mutual understanding can only increase the productivity of the organization overall.
Automation being at the core of DevOps, there is a wide range of security automation to draw from: supply-chain and dependency scanning, code analysis, infrastructure and configuration policy checks, and dynamic scans of applications in test environments, among others.
In today’s IT landscape, shift-left principles have emerged as a vital security strategy. They place security at the heart of software development and IT operations, detecting vulnerabilities early and reducing the risk of security breaches.
This proactive approach safeguards data and privacy and shields organizations from financial losses, legal consequences, and reputational damage.
Shift-left promotes collaboration, automation, and continuous improvement, breaking down traditional silos among development, operations, and security teams. It’s not a passing trend but a strategic necessity, ensuring the delivery of secure, resilient, and reliable software.
Do you have to help your team shift API security left? Get started with Escape now.
Want to learn more?
Check out the following articles:
Source: https://escape.tech/blog/why-does-devops-recommend-shift-left-testing-principles/
The post Why does DevOps recommend “Shift left” principles? appeared first on Coder's Jungle.
Phishing remains one of the most dangerous and widespread cybersecurity threats. This blog examines the escalating phishing landscape, shortcomings of common anti-phishing approaches, and why implementing a Protective DNS service as part of a layered defense provides the most effective solution.
Phishing is now the most common initial attack vector, overtaking stolen or compromised credentials, which held the top spot in the prior year's report. (Source: IBM Security: Cost of a Data Breach Report 2023)
According to recent research, the number of phishing attacks vastly outpaces all other cyber threats. Business Email Compromise (BEC), a type of phishing attack, results in the greatest financial losses of any cybercrime.
In 2021 alone, estimated adjusted losses from BEC totaled $2.4 billion USD globally. This staggering figure represents more than 59 percent of the losses from the top five most costly internet crimes worldwide. These statistics highlight the immense threat posed by phishing, especially BEC attacks, to organizations across industries. (Source: Microsoft Digital Defense Report 2022)
Phishing continues to dominate the Social Engineering incident classification pattern, ensuring that email remains one of the most common and easiest means of influencing individuals in an organization (Source: 2023 Verizon Data Breach Investigations Report). These trends demonstrate how phishing remains one of the most pervasive and costly cyber threats facing businesses today.
These trends make it clear that phishing attacks are becoming increasingly threatening to businesses of all sizes. Organizations need to implement a layered security approach that includes Protective DNS to effectively protect themselves from phishing attacks.
While phishing methods are constantly evolving, most attacks blend a few common ingredients.
This combination of highly tailored social engineering, stealthy technical deception, and harmful payloads allows phishing attacks to circumvent many current defenses.
Organizations employ various methods to combat phishing, but limitations remain:
1. Email filtering relies on signatures, display names, and content inspection, all of which attackers routinely vary to slip past detection.
2. Blacklisting URLs fails to keep pace as phishers exploit typosquatting and rapidly generate new fraudulent domains.
3. User education is unreliable when faced with highly-refined psychological manipulation tailored to override caution.
4. Multi-factor authentication (MFA) blocks unauthorized access by requiring an additional factor, but does not stop the phishing attempt itself. Users still access harmful links or attachments.
5. Business email compromise (BEC) filters focus solely on email while phishing also occurs via web, social media, search, and apps. Other vectors are missed.
These examples demonstrate the need for advanced solutions that reliably block phishing proactively at the lower level before attacks reach end users. This is where Protective DNS comes in.
A Protective DNS service can preemptively block known phishing sites and domains before requests reach them by focusing on the DNS layer, a common thread in most internet interactions. This prevents connections to phishing content at the source, stopping attacks earlier in the kill chain.
Key advantages of Protective DNS include:
Real-time blocking - Newly identified phishing sites and emails are blocked instantly across the protected network as they are added to the DNS filter database. No reliance on match lists, signatures, or patterns.
Identifies emerging threats faster - By leveraging our unique adversary infrastructure platform's data lake, Protective DNS services continuously analyze the web to rapidly detect phishing sites as they emerge.
Universal coverage - Blocks phishing sites regardless of vector - email links, web pages, documents, apps, search engine results, etc.
Difficult to evade - Blocking based on domain reputation prevents circumvention via display name spoofing, content changes, or social engineering.
For example, a phishing email slips past the corporate email gateway defenses. But when the embedded link is clicked, the Protective DNS service recognizes the destination domain as fraudulent based on real-time threat intelligence and blocks access. The user's device never connects to the phishing site.
This unique ability to reliably stop phishing attacks prior to interaction establishes Protective DNS as an essential anti-phishing technical control.
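As a purely illustrative sketch of that mechanic (not HYAS's implementation; the domain name and the answer address below are made up), the decision happens at resolution time, before any connection is attempted:

package main

import (
	"fmt"
	"strings"
)

// blocklist stands in for a continuously updated domain-reputation feed.
var blocklist = map[string]bool{
	"login-micros0ft.example": true, // hypothetical phishing domain
}

// resolve refuses to answer when the queried domain (or any parent
// domain) is flagged, so the client never connects to the phishing
// site, whichever vector delivered the link: email, web, document, app.
func resolve(domain string) (string, error) {
	d := strings.ToLower(strings.TrimSuffix(domain, "."))
	for d != "" {
		if blocklist[d] {
			return "", fmt.Errorf("blocked by policy: %s", domain)
		}
		i := strings.Index(d, ".")
		if i < 0 {
			break
		}
		d = d[i+1:] // walk up to the parent domain
	}
	return "203.0.113.10", nil // placeholder answer from the upstream resolver
}

func main() {
	if _, err := resolve("login-micros0ft.example"); err != nil {
		fmt.Println(err) // the lookup fails before any connection is made
	}
}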
While Protective DNS serves as the foundation for blocking phishing proactively, incorporating additional safeguards provides defense-in-depth. This blend of human and technical measures provides overlapping protection across potential phishing vectors.
As phishing threats accelerate, organizations can no longer rely solely on reactive methods like email filtering, URL blacklisting, or end user discretion. Businesses need proactive technical solutions like Protective DNS to reliably block phishing at the source before attacks reach and fool users.
Anchoring your anti-phishing defenses with Protective DNS and layered security provides comprehensive protection against this dangerous and constantly evolving threat.
Guide to Protective DNS Security
AV-TEST evaluation of HYAS Protect
Want to talk to an expert to learn more about Protective DNS? Contact us today to find out what HYAS security solutions can do for your organization.
The post How to Stop Phishing Attacks with Protective DNS appeared first on Security Boulevard.
A major flaw has been detected in Exim’s mail transfer agent (MTA) software that has gone without a patch for more than a year.
Researchers from Trend Micro’s Zero Day Initiative were tipped off in June last year by an anonymous researcher about an out-of-bounds write weakness discovered in the SMTP service, BleepingComputer reported.
Exim is an MTA that runs in the background of email servers, and hackers can exploit the flaw to run malware on vulnerable endpoints.
That vulnerability is being tracked as CVE-2023-42115 and can be used to crash software and corrupt valuable data; more importantly, it can be used to run malicious code on vulnerable servers.
Exim was reportedly first notified about the flaw in June 2022, and then again in May 2023, but apparently to no avail. Given Exim’s failure to address it, Trend Micro’s Zero Day Initiative has now published an advisory describing the flaw and detailing its discussions with Exim over the intervening months.
According to BleepingComputer, MTA servers like Exim are a popular target among hackers as they can be accessed remotely and used to move into the wider corporate network. It’s also apparently the “world’s most popular MTA software, installed on more than 56% of 602,000 internet-connected mail servers” (342,000). This is mostly because it comes bundled with many popular Linux distros including Debian and Red Hat.
Three years ago, Sandworm (a Russian state-sponsored threat actor) was using a flaw found in Exim to infiltrate endpoints, the NSA warned at the time.
“The Russian actors, part of the General Staff Main Intelligence Directorate’s (GRU) Main Center for Special Technologies (GTsST), have used this exploit to add privileged users, disable network security settings, execute additional scripts for further network exploitation; pretty much any attacker’s dream access – as long as that network is using an unpatched version of Exim MTA,” the NSA said.
Via BleepingComputer
Tux the cat has been found, and Lyft has agreed to cover "all of her veterinary bills," after a Lyft driver zoomed away with the sick cat still in the car, a Lyft spokesperson told Ars.
"We’re so happy to report that Tux has been reunited with her owner, and we are focused on ensuring Tux has everything she needs right now, including covering all of her veterinary bills," Lyft's spokesperson told Ars.
Tux's story went viral online after the cat's owner, Palash Pandey, posted on X, detailing his attempts to recover his lost cat. The cat went missing on Saturday, and millions of concerned online onlookers worried she might not be recovered. But Pandey posted today that Lyft investigators helped retrieve the cat, which was found at a real estate agency in Austin, Texas.
This significant issue might allow attackers to exfiltrate sensitive information, compromise data integrity, obtain unauthorized access, and more, posing serious operational and reputational consequences.
To fix this issue, Apache NiFi’s maintainers have provided patches and upgrades.
The remote code execution vulnerability has a CVSS severity score of 8.8 (High) and is tracked as CVE-2023-34468.
According to CYFIRMA Research, the bug enables remote code execution via specifically crafted H2 database connection strings. H2 is a widely used embedded Java-based database in Apache NiFi installations.
“Attackers could potentially take advantage of this vulnerability to execute arbitrary code on vulnerable Apache NiFi instances. This could lead to unauthorized access, data theft, or system compromise”, researchers said.
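To make the class of fix concrete, here is a minimal Go sketch of connection-string validation. This is not NiFi’s actual patch (NiFi is a Java codebase, and the function name here is hypothetical); it only illustrates the idea of rejecting H2 JDBC URLs that smuggle in initialization parameters, the vector abused in attacks of this kind:

package main

import (
	"fmt"
	"strings"
)

// validateH2URL is an illustrative check, not Apache NiFi's real fix:
// it refuses H2 JDBC URLs carrying initialization parameters.
func validateH2URL(url string) error {
	lower := strings.ToLower(url)
	if !strings.HasPrefix(lower, "jdbc:h2:") {
		return fmt.Errorf("not an H2 URL: %s", url)
	}
	if strings.Contains(lower, ";init=") {
		return fmt.Errorf("initialization parameters are not allowed: %s", url)
	}
	return nil
}

func main() {
	fmt.Println(validateH2URL("jdbc:h2:./data/app"))             // accepted (nil)
	fmt.Println(validateH2URL("jdbc:h2:mem:test;INIT=<script>")) // rejected
}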
Reports mention that almost 2,700 publicly accessible Apache NiFi instances might be affected by CVE-2023-34468.
Since Apache NiFi is utilized worldwide, the effects of this vulnerability are not geographically restricted. As a result, businesses in places like North America, Europe, Asia-Pacific, and others where Apache NiFi installations are widely distributed may be vulnerable to abuse.
This vulnerability potentially impacts healthcare, banking, government, telecommunications, and any other industry that depends on Apache NiFi for data integration and automation.
Businesses handling sensitive data, or those relying heavily on Apache NiFi’s capabilities, may be particularly appealing targets.
If successfully exploited, the vulnerability allows unauthorized code execution, which could compromise the wider technical ecosystem: servers, applications, and interconnected systems integrated with Apache NiFi are all in scope, amplifying the potential impact on a company’s technological infrastructure.
The researchers have discovered that unidentified hackers are selling Apache NiFi exploits on dark web forums.
Affected versions: Apache NiFi 0.0.2 through 1.21.0. Organizations running these versions are at risk and should upgrade immediately.
Go to this link to read the documentation and to download and install the compiler (everything there is straightforward).
Installation instructions for Go can be found here:
go get -u github.com/golang/protobuf/{proto,protoc-gen-go}
You may need to use -f if you have something like this in ~/.gitconfig:
[url "ssh://git@github.com/"]
    insteadOf = https://github.com/
For this example, we will save an array of numbers and a string, and then read them back. Furthermore, we will assume that we are in the root of our new project.
The proto-file will look like this:
msg/msg.proto
// comments follow C/C++ style
/*
and multiline too
*/
syntax = "proto3";
// package name, this will be saved in the resulting go-file
package msg;
// type of data to be saved
message msg {
  // type field_name = field_number
  string key = 1;
  // repeated means slice
  repeated int64 value = 2;
}
/*
In the third version, there are no required fields and extensions.
Instead of extensions, the type `Any` is implemented (more on that later)
*/
Now we need to compile the proto file:
protoc --go_out=. msg/*.proto
The result will be a file like this:
msg/msg.pb.go
package msg

import proto "github.com/golang/protobuf/proto"

var _ = proto.Marshal

/*
The structure looks like this. Note that tags for JSON have been added automatically
*/
type Msg struct {
	Key   string  `protobuf:"bytes,1,opt,name=key" json:"key,omitempty"`
	Value []int64 `protobuf:"varint,2,rep,name=value" json:"value,omitempty"`
}

// methods are needed to make the structure conform to the proto.Message interface
func (m *Msg) Reset()         { *m = Msg{} }
func (m *Msg) String() string { return proto.CompactTextString(m) }
func (*Msg) ProtoMessage()    {}

func init() {
	// the real generated file registers the message type here,
	// e.g. proto.RegisterType((*Msg)(nil), "msg.msg")
}
Now let's create a structure, write its bytes, and read it back:
main.go
package main

import (
	"log"

	"./msg"
	"github.com/golang/protobuf/proto"
)

func main() {
	// create a new "message"
	msg1 := &msg.Msg{
		Key:   "Hello Protocol Buffers",
		Value: []int64{1, 2, 3, 4},
	}
	// structure to bytes
	data, err := proto.Marshal(msg1)
	if err != nil {
		log.Fatal("marshaling error: ", err) // log.Fatal exits, no return needed
	}
	// how much memory does it take?
	log.Printf("data length: %d", len(data))
	// bytes into the structure
	msg2 := new(msg.Msg)
	err = proto.Unmarshal(data, msg2)
	if err != nil {
		log.Fatal("unmarshaling error: ", err)
	}
	// now both structures must be equal
	if msg1.Key != msg2.Key {
		log.Printf("unexpected value, expected '%s', got '%s'", msg1.Key, msg2.Key)
	}
	for i := 0; i < 4; i++ {
		if msg1.Value[i] != msg2.Value[i] {
			log.Printf("unexpected value, expected %d, got %d", msg1.Value[i], msg2.Value[i])
		}
	}
	log.Println("Done")
}
As you can see, it's easy. Digging deeper: suppose we want to build a library that stores "messages" without the message type being defined up front, keeping whatever we hand it in a single structure. For exactly this, proto3 implements the type Any, which can hold a value of any message type. Any looks like this:
message Any {
  string type_url = 1; // type
  bytes value = 2;     // type content in bytes
}
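Here is a minimal sketch of packing our Msg into an Any and back, assuming the ptypes helper package that ships alongside github.com/golang/protobuf (the Msg type and the ./msg import are the ones from the example above):

package main

import (
	"log"

	"./msg"
	"github.com/golang/protobuf/ptypes"
)

func main() {
	msg1 := &msg.Msg{Key: "wrapped in Any", Value: []int64{1, 2}}

	// pack: the concrete message is serialized into value,
	// and type_url is filled in from the registered type name
	packed, err := ptypes.MarshalAny(msg1)
	if err != nil {
		log.Fatal("packing error: ", err)
	}
	log.Printf("type_url: %s", packed.TypeUrl)

	// unpack: the caller chooses the concrete type to restore
	msg2 := new(msg.Msg)
	if err := ptypes.UnmarshalAny(packed, msg2); err != nil {
		log.Fatal("unpacking error: ", err)
	}
	log.Println("restored key:", msg2.Key)
}

UnmarshalAny checks that type_url matches the registered name of the target type, which is what lets a generic store hand back strongly typed messages.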
A brand new malware-as-a-service (MaaS), capable of a wide range of malicious actions, is being offered on the dark web, researchers have found.
Cybersecurity experts from Zscaler ThreatLabz observed a MaaS called BunnyLoader being offered online for $250 (lifetime license).
After further analysis, the researchers discovered everything BunnyLoader can do: from deploying stage-two malware, to stealing passwords stored in browsers, to grabbing system information. Furthermore, BunnyLoader can run remote commands on the infected endpoint, capture keystrokes via an integrated keylogger, and monitor the clipboard for cryptocurrency wallets.
If a victim decides to send a cryptocurrency payment from one address to another, they’d usually copy and paste the recipient’s address in the app, mostly because wallet addresses are a long string of random letters and numbers. When malware monitors the clipboard, it can detect when the victim copies a wallet address and can replace the contents in the clipboard with an address belonging to the attacker. Thus, when a payment is initiated, the funds go to the attacker’s account.
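For a sense of the defensive side, here is a minimal Go sketch that watches the clipboard and warns when one wallet-like address is silently replaced by a different one, the swap signature described above. It assumes the third-party github.com/atotto/clipboard package, and the address pattern is a deliberately rough approximation:

package main

import (
	"log"
	"regexp"
	"time"

	"github.com/atotto/clipboard" // third-party clipboard access
)

// walletRe roughly matches common Bitcoin address formats; real
// tooling would cover more currencies and validate checksums.
var walletRe = regexp.MustCompile(`^(bc1|[13])[a-zA-Z0-9]{25,42}$`)

func main() {
	var last string
	for {
		cur, err := clipboard.ReadAll()
		if err == nil && cur != last {
			// one wallet-like value replaced by a different
			// wallet-like value is the classic swap signature
			if walletRe.MatchString(last) && walletRe.MatchString(cur) {
				log.Printf("clipboard wallet address changed: %q -> %q", last, cur)
			}
			last = cur
		}
		time.Sleep(200 * time.Millisecond)
	}
}

A legitimate user copying two different addresses in a row would also trigger the warning, so this is a prompt to re-check before paying, not a verdict.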
BunnyLoader was written in C/C++ by a threat actor named PLAYER_BUNNY (aka PLAYER_BL). It has been under active development since early September this year, allegedly getting new features and enhancements every day. Some of the newer upgrades include anti-sandbox and antivirus evasion techniques, made possible via a fileless loading feature.
Hackers who buy a license can also expect a C2 panel to monitor all active tasks, keep track of infection statistics, track connected and inactive hosts, and more.
The only thing that remains a mystery with BunnyLoader is how it makes it to the victim’s endpoints, as the researchers were unable to discover any initial access mechanisms.
"BunnyLoader is a new MaaS threat that is continuously evolving their tactics and adding new features to carry out successful campaigns against their targets," the researchers concluded.
Via TheHackerNews
One car has been flying the flag for electrification longer than most: the Toyota Prius.
Over four generations and more than five million examples sold in 26 years, the Prius was originally billed by its maker as ‘the car for the 21st century’, a trailblazer for more efficient driving, harnessing part-electric power in what subsequently became known as a self-charging hybrid.
The world has come around to Toyota’s view, and then some. Every car maker with an eye on a solvent future is now producing electrified models to push emissions (Toyota says the Prius alone has kept more than 82 million tonnes of CO2 out of the atmosphere) ultimately down towards zero. Not every car maker or car buyer is ready to go all-in on electric cars, so hybrid power remains a fine bridging technology, improving efficiency and emissions by pairing electric propulsion with an internal combustion engine.
In this context, then, you’d think that the new fifth-generation Toyota Prius would be ripe for a successful launch in the UK, especially as it looks this good. But no: this Prius is not for us.
Truth is, we Brits don’t buy the Prius. When this new version was revealed at the 2022 LA motor show, Toyota highlighted that just 563 Prius models were sold in 2021 compared with just under 18,000 Toyota C-HRs. Uber drivers are being pointed in the direction of the Toyota Corolla Touring Sports.
Perhaps we didn’t buy the previous Priuses (Pri-ii?) because of that Uber reputation in recent years and, before that, because of a sneeriness towards the car for being seen as a bit of a Hollywood stooge. It was also always quite frumpy to look at and, up until the fourth-generation model, devoid of any character to drive, a relic of Toyota’s dark days.
But just look at it now. Would we be inclined to buy this Prius? It’s sleek and sporty, rakish in profile and really rather desirable. Put it next to even a fourth-generation Prius and you’d never guess the lineage, save for a very loose wedge shape.
The proportions are very different from before. This Prius is 50mm lower and 46mm shorter than the previous car but the wheelbase has increased by 50mm. It’s 22mm wider as well and looks more so than that thanks to a light bar that runs the full width of the car's front. It rides on 19in alloy wheels.
It’s all change visually and all change under the bonnet, too. While a classic series hybrid (sorry, self-charging hybrid) will be offered in some global markets based around a 2.0-litre engine, in Europe the Prius will be sold as a plug-in hybrid only. This mixes a 2.0-litre petrol engine with twin electric motors for a combined 220bhp, working in conjunction with a 13.6kWh battery that provides a 45-mile electric range.
A very nice drivetrain it is, too. For the most part, you’re able to nip around on electric power as that EV range is substantial and on a typical journey you’re unlikely to exhaust it. When you do, the engine can be a bit grumbly and the refinement drops under heavier acceleration loads; ’twas ever thus with many a hybrid and the Prius is no exception.
Still, among a sea of heavy, bulky and often bloated electric cars, the Prius feels a breath of fresh air to drive. It is based on the latest version of Toyota’s TNGA platform, which has already yielded many an everyday handling hero and the Prius is another one. Here, that platform is said to be stiffer, quieter and more stable than before.
This new Prius feels shrink-wrapped and fleet of foot, alert and nimble and keen to change direction. The chunky steering wheel feels great in your hands and the steering itself is direct and precise, backed by the Prius’s willingness to be really turned in to a corner. It’s surprisingly resistant to understeer and just good, honest, everyday fun. Drab to drive this is not, and the tidy handling is backed up with a supple ride at all speeds. Who’d have thought we’d ever say this about a Prius?
The interior lacks the wow factor of the exterior, let down by a drab steering wheel taken from the Toyota bZ4X. Better news comes from the number of physical controls on the dashboard, including those for the heating and ventilation. A large touchscreen for the infotainment atop the dash is clear, with good graphics, and a further driver display is pushed a long way back almost to the windscreen, which makes it sit nicely in your eyeline and probably saves Toyota the cost of fitting a head-up display in the process.
The sleeker profile has resulted in a trade-off in rear space for passengers, if not at the tape measure then certainly in perception as the narrower windows make it feel cosier. The rear interior door handles might need a continental taxi driver to put a sign up on the back of the front headrest saying where to locate them, assuming they didn’t buy a Corolla estate instead, of course.
Ultimately, it’s a shame UK buyers aren’t given the chance to buy the Prius when it’s this capable. Toyota is a company that rarely lacks confidence, but it’s one thing deciding not to take the Prius when looking at the past performance of a very different proposition, and quite another when the car has been so successfully reinvented.
Toyota Prius PHEV
Verdict 4 stars
Engine: 4 cyls in line, 1987cc, petrol, plus 161bhp permanent magnet synchronous motor
Power: 220bhp (peak combined)
Torque: 140lb ft at 5200rpm (engine only)
Transmission: eCVT, front-wheel drive
Battery: 13.6kWh
Kerb weight: 1545kg
0-62mph: 6.8sec
Top speed: 110mph
Economy: 404mpg
CO2, tax band: 16g/km, n/a
Electric range: 45 miles
Rivals: Hyundai Ioniq PHEV, Kia Niro PHEV
HackerNoon, the independent technology publishing company, has released The Editing Protocol to the public, a set of rules and guidelines that can be used by humans and machines to determine whether a story is worth publishing, how to specifically improve the story’s content, and how to distribute the story with more reach and relevance.
One of the world leaders in online publishing, HackerNoon is home to over 45,000 published contributors. As a startup, the company was tasked with reviewing thousands of submissions a month with just a small team of editors. To do that, HackerNoon created a set of preliminary checks, rules, and quality guidelines that determine whether or not a story can be published, or should be rejected. This document was dubbed The Editing Protocol and can be read by humans to guide publishing at scale.
“In publishing a hundred thousand stories, we’ve learned best practices for how to improve and distribute professional technology content on the internet,” said HackerNoon Creator and CEO David Smooke.
“I’m excited to open up this technical documentation for feedback, and continue integrating the most cutting-edge technologies into the Editing Protocol.”
To elevate the quality of published content, and to optimize the user experience for contributors and readers alike, the editing protocol incorporates several inventive technologies, some of which are described below.
The protocol’s guidelines are rule-based and can be easily converted into conditional statements. Once programmed, the protocol allows human editors to focus on the quality and improvement of publishable stories, while the system automatically informs writers of which rule they have broken or which guidelines they have missed. With The Editing Protocol, small teams can publish content at scale, using both human editors and rule-based flagging systems to provide an efficient publishing process.
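As an illustration of that rule-to-conditional translation, here is a minimal Go sketch; the Story fields and the two rules are hypothetical stand-ins, not HackerNoon’s actual checks:

package main

import (
	"fmt"
	"strings"
)

// Story is a minimal stand-in for a submitted draft.
type Story struct {
	Title string
	Body  string
}

// Rule pairs a human-readable guideline with its conditional check.
type Rule struct {
	Name  string
	Check func(Story) bool // true means the rule is satisfied
}

// hypothetical rules, for illustration only
var rules = []Rule{
	{"title is present", func(s Story) bool { return strings.TrimSpace(s.Title) != "" }},
	{"body has at least 500 words", func(s Story) bool { return len(strings.Fields(s.Body)) >= 500 }},
}

// Review returns every guideline the story breaks, so the writer
// can be told exactly what to fix before resubmitting.
func Review(s Story) (broken []string) {
	for _, r := range rules {
		if !r.Check(s) {
			broken = append(broken, r.Name)
		}
	}
	return broken
}

func main() {
	fmt.Println(Review(Story{Title: "", Body: "too short"}))
}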
HackerNoon has already programmed a lot of the protocol into the custom CMS the company has built from the ground up. For instance, submissions that are below the minimum structural quality measure are automatically rejected, and writers are sent an email specifically stating how to improve the story they hope to publish before resubmitting.
Furthermore, there is a section in the protocol that highlights the importance of the originality score. Because the visual presentation of a story is one of the most important aspects of online publishing, HackerNoon built an AI image generator into its CMS, allowing writers to create original images and helping them adhere to the protocol using the tech itself.
HackerNoon believes the protocol will not only guide human editors to better vet stories, but also help developers automate the tedious workflows within the traditional editorial review process.
The full protocol can be viewed at https://editingprotocol.com/, which is hosted on HackerNoon’s custom CMS builder.
With The Editing Protocol, HackerNoon aims to help make the internet better by getting rid of the noise: low-quality self-published articles, spammy content riddled with even spammier links, and a horde of other bad SEO practices that prioritize clicks at the expense of quality.
As the Internet changes with the emergence of Web3 technologies, it is likely that The Editing Protocol, too, will change. As such, whenever rules or processes are added, removed, or updated, users can find them on editingprotocol.com, and use that domain as the ground truth.