
Other articles (57)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being XML-based, it handles a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • MediaSPIP in private mode (Intranet)

    17 September 2013, by

    From version 0.3 onwards, a MediaSPIP channel can be made private, blocked to anyone who is not logged in, thanks to the "Intranet/extranet" plugin.
    When enabled, the Intranet/extranet plugin blocks access to the channel for any unidentified visitor, preventing them from reaching the content by systematically redirecting them to the login form.
    This system can be particularly useful for certain uses, such as: a workshop with children whose content must not (...)

On other sites (8904)

  • Evolution #4749 (New): [UX] Label behaviour: what by default, what as an exception?

    27 April 2021, by RastaPopoulos ♥

    This ticket is meant to think through, and possibly redesign, the default choices for form labels.

    Current situation

    As we know, ergonomics is largely a matter of objective facts: certain placements, sizes, weights, etc. work better than others, and this can be proven through user testing.

    Yet for years now, eye-tracking tests have shown that forms are
    1) read faster
    2) understood better
    when the labels are placed above the fields.

    That does not mean other layouts must be banned outright, but it clearly means this should be the default behaviour, with labels placed to the side only occasionally, as an explicit choice.

    Moreover, ergonomics professionals (based on the same tests) all recommend that, in the rare cases where labels are placed to the side, they should be aligned right against the field, for the same comprehension reasons.

    The advantages of labels above the fields:
    - proven to be much better understood by everyone
    - faster to read
    - works out of the box on every screen, no adaptation needed
    - versatile and generic with regard to label content: works better whatever the length, and therefore to be favoured in a multilingual context
    => this matches the bulk of our use case: a multilingual CMS, finally moving towards responsive design, and concerned with accessibility.

    The only drawback: it increases the height of forms, but that really only has an impact on forms with a very large number of fields, which is rare!
    When a form is extremely long, there are even several methods that can be used without moving the labels to the side:
    1) placing some fields on the same line (first name + last name, etc.)
    2) splitting the form into several steps.

    Proposal for the future

    - all labels should be above the fields as the default behaviour
    - for certain cases, a class allows labels to be placed to the side: only on large screens, they stay above in a mobile-first approach
    - if placed to the side: better if aligned against the field (so right-aligned in LTR)
    - example of a rare candidate form: changing dates

    A few sources

    User tests
    https://www.uxmatters.com/mt/archives/2006/07/label-placement-in-forms.php

    Placing a label above an input field works better in most cases
    Placing labels above input fields is preferable
    In most cases, when placing labels to the left of input fields, using left-aligned labels imposes a heavy cognitive workload on users
    if you choose to place them to the left of input fields, at least make them right aligned

    From the very well-known usability consultancy Nielsen Group
    https://www.nngroup.com/articles/form-design-white-space/

    We recommend placing field labels above the corresponding text fields [in bold in the original!]
    it makes the form easier to scan, because users can see the text field in the same fixation as the label. Top placement also allows for longer field labels
    If the labels are too far to the left, it can be difficult to associate the correct label with its corresponding field

    At Adobe, they recommend following the recommendations of the first source
    https://xd.adobe.com/ideas/principles/web-design/best-practices-form-design/

    Matteo Penzo’s 2006 article on label placement suggests that forms are completed faster if labels are on top of the fields. Top-aligned labels are good if you want users to scan the form as quickly as possible.
    The biggest advantage of top-aligned labels is that different-sized labels and localized versions can more easily fit the UI.
    Takeaway: If you want users to scan a form quickly, put labels above the fields. The layout will be easier to scan because the eye will move straight down the page. However, if you want users to read carefully, put labels to the left of the fields. This layout will slow down the reader and make them scan in a Z-shaped motion.

    From an interface design application
    https://phase.com/magazine/usability-of-forms/

    from a cognitive point of view, the association is powerful
    Also, the eyes move only in one direction since the scanning is top down as compared to Z shape (left-right and top-bottom) for inline labels
    Design is space efficient and hence adaptable to all resolutions; in short, responsive in nature
    We also get flexibility regarding the length of labels. This proves useful while working with variable label lengths like multilingual support for applications
    One drawback of this approach is the increased height of the form. However, it can be solved with alternate designs like a grouping of fields or stepper forms

  • Elacarte Presto Tablets

    14 March 2013, by Multimedia Mike — General

    I visited an Applebee’s restaurant this past weekend. The first thing I spied was a family at a table with what looked like a 7-inch tablet. It’s not an uncommon sight. However, as I moved through the restaurant, I noticed that every single table was equipped with such a tablet. It looked like this:


    ELaCarte's Presto Tablet

    For a computer nerd like me, you could probably guess that I was far more interested in this gadget than the cuisine. The thing said “Presto” on the front and “Elacarte” on the back. Putting this together, we get the website of Elacarte, the purveyors of this restaurant tablet technology. Months after the iPad was released in 2010, I remember stories about high-end restaurants showing their wine lists via iPads. This tablet goes well beyond that.

    How was it? Well, confusing, mostly. The hostess told us we could order through the tablet or through her. Since we already knew what we wanted, she just manually took our order and presumably entered it into the system. So, right away, the question is: Do we order through a human or through a computer? Or a combination? Do we have to use the tablet if we don’t want to?

    Hardware
    When picking up the tablet, it’s hard not to notice that it is very heavy. At first, I suspected that it was deliberately weighted down as some minor attempt at an anti-theft measure. But then I remembered what I know about power budgets of phones and tablets– powering the screen accounts for much of the battery usage. I realized that this device needs to drive the screen for about 14 continuous hours each day. I.e., the weight must come from a massive battery.

    The screen is good. It’s a capacitive touchscreen, so nice and responsive. When I first spied the device, I felt certain it would be a resistive touchscreen (which is more accurately called a touch-and-press-down screen). There is an AC adapter on the side of the tablet. This is the only interface to the device:


    ELaCarte Presto Tablet -- view of adapter

    That looks to me like an internal SATA connector (different from an eSATA connector). Foolishly, I didn’t have a SATA cable on me so I couldn’t verify.

    User Interface
    The interface options are: Order, Games, Neighborhood, and Pay. One big benefit of accessing the menu through the Order option is that each menu item can have a picture. For people who order more by picture than text description, this is useful. Rather, it would be, if more items had pictures. I’m not sure there were more pictures than seen in the print menu.

    For Games, there were a variety of party games. The interface clearly stated that we got to play 2 free games. This implied to me that further games cost money. We tried one game briefly and the food came.

    Two more options: Neighborhood – I know I dug into this option, but I forget what it was. Maybe it discussed local attractions. Finally, Pay. This thing has an integrated credit card reader. There is no integrated printer, though, so if you want a printed receipt, you will have to request one from a human.

    Experience
    So we ordered through a human since we didn’t feel like being thrust into this new paradigm when we just wanted lunch. The staff was obviously amenable to that. However, I got a chance to ask them a lot of questions about the particulars. Apparently, they have had this system for about 5 months. It was confirmed that the tablets do, in fact, have gargantuan batteries that have to last through the restaurant’s entire business hours. Do they need to be charged every night? Yes, they do. But how? The staff described several large charging blocks with many cables sprouting out. Reportedly, some units still don’t make it through the entire day.

    When it was time to pay, I pressed the Pay button on the interface. The bill I saw had nothing in common with what we ordered (actually, it was cheaper, so perhaps I should have just accepted it). But I pointed it out to a human and they said that this happens sometimes. So they manually printed my bill. There was a dollar charge for the game that was supposed to be free. I pointed this out and they removed it. It’s minor, I know, but it’s still worth trying to work out these bugs.

    One of the staff also described how a restaurant doesn’t need to employ as many people thanks to the tablet. She gave a nervous, awkward, self-conscious laugh when she said this. All I could think of was this Dilbert comic strip in which the boss realizes that his smartphone could perform certain key functions previously handled by his assistant.

    Not A New Idea
    Some people might think this is a totally new concept. It’s not. I was immediately reminded of my university days in Boulder, Colorado, USA, circa 1997. The local Taco Bell and Arby’s restaurants both had touchscreen ordering kiosks. Step up, interact with the (probably resistive) touchscreen, get a number, and step to the counter to change money, get your food, and probably clarify your order because there is only so much that can be handled through a touchscreen.

    What I also remember is when they tore out those ordering kiosks, also circa 1997. I don’t know the exact reason. Maybe people didn’t like them. Maybe there were maintenance costs that made them not worth the hassle.

    Then there are the widespread self-checkout lanes in grocery stores. Personally, I like those, though I know many don’t. However, this restaurant tablet thing hasn’t won me over yet. What’s the difference? Perhaps that automated lanes at grocery stores require zero external assistance– at least, if you do everything correctly. Personally, I work well with these lanes because I can pretty much guess the constraints of the system and I am careful not to confuse the computer in any way. Until they deploy serving droids, or at least food conveyors, there still needs to be some human interaction and I think the division between the human and computer roles is unintuitive in the restaurant case.

    I don’t really care to return to the same restaurant. I’ll likely avoid any other restaurant that has these tablets. For some reason, I think I’m probably supposed to be the ideal consumer of this concept. But the idea will probably perform all right anyway. Elacarte’s website has plenty of graphs demonstrating that deploying these tablets is extremely profitable.

  • FFmpeg RTSP drop rate increases when frame rate is reduced

    13 April 2024, by Avishka Perera

    I need to read an RTSP stream, process the images individually in Python, and then write the images back to an RTSP stream. As the RTSP server, I am using Mediamtx [1]. For streaming, I am using FFmpeg [2].

    I have the following code that works perfectly fine. For simplification purposes, I am streaming three generated images.

import time
import numpy as np
import subprocess

width, height = 640, 480
fps = 25
rtsp_server_address = "rtsp://localhost:8554/mystream"

# ffmpeg reads raw RGB frames from stdin, encodes them with libx264 and
# publishes the result to the RTSP server.
ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f",
    "rawvideo",
    "-pix_fmt",
    "rgb24",
    "-s",
    f"{width}x{height}",
    "-i",
    "-",
    "-r",
    str(fps),
    "-avoid_negative_ts",
    "make_zero",
    "-vcodec",
    "libx264",
    "-threads",
    "4",
    "-f",
    "rtsp",
    rtsp_server_address,
]

# Three solid-colour test frames (red, green, blue).
colors = np.array(
    [
        [255, 0, 0],
        [0, 255, 0],
        [0, 0, 255],
    ]
).reshape(3, 1, 1, 3)
images = (np.ones((3, width, height, 3)) * colors).astype(np.uint8)

if __name__ == "__main__":

    process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
    start = time.time()
    exported = 0
    while True:
        # Sleep so that frames are written to the pipe at the target frame rate.
        exported += 1
        next_time = start + exported / fps
        now = time.time()
        if next_time > now:
            sleep_dur = next_time - now
            time.sleep(sleep_dur)

        # Cycle through the three test frames and push the raw bytes to ffmpeg.
        image = images[exported % 3]
        image_bytes = image.tobytes()

        process.stdin.write(image_bytes)
        process.stdin.flush()

    # Not reached, since the loop above never exits.
    process.stdin.close()
    process.wait()


    The issue is that I need to run this at 10 fps, because the processing step is heavy and can only afford 10 fps. As I reduce the frame rate from 25 to 10, the drop rate increases from 0% to 100%, and after a few iterations I get a BrokenPipeError: [Errno 32] Broken pipe. Refer to the appendix for the complete log.
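
    One adjustment that might be worth testing (an assumption, not something confirmed here): the rawvideo demuxer assumes an input rate of 25 fps for piped data unless told otherwise, so with -r 10 only on the output side ffmpeg has to drop frames to get from 25 fps down to 10 fps. Declaring the real input rate before -i - keeps the two rates equal. A minimal sketch of the adjusted command list, reusing the variables from the script above with fps set to 10:

ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f", "rawvideo",
    "-pix_fmt", "rgb24",
    "-s", f"{width}x{height}",
    "-framerate", str(fps),  # input option: the rate at which the pipe actually delivers frames
    "-i", "-",
    "-r", str(fps),          # output rate now matches the declared input rate
    "-avoid_negative_ts", "make_zero",
    "-vcodec", "libx264",
    "-threads", "4",
    "-f", "rtsp",
    rtsp_server_address,
]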

    As an alternative, I can use OpenCV compiled from source with GStreamer [3], but I would prefer to stick with FFmpeg to keep shipping simple, since compiling OpenCV from source can be tedious and system-dependent.
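
    For comparison, here is a rough sketch of what that alternative could look like, assuming an OpenCV build with GStreamer support. The pipeline elements (appsrc, x264enc, rtspclientsink) follow the pattern documented in [3]; the exact element options are illustrative, not taken from that page.

import time
import numpy as np
import cv2  # requires an OpenCV build with GStreamer support

width, height, fps = 640, 480, 10
pipeline = (
    "appsrc ! videoconvert"
    " ! x264enc tune=zerolatency speed-preset=ultrafast"
    " ! video/x-h264,profile=baseline"
    " ! rtspclientsink location=rtsp://localhost:8554/mystream"
)
# fourcc=0 lets the GStreamer backend derive the caps from the pipeline string.
writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, fps, (width, height), True)
if not writer.isOpened():
    raise RuntimeError("VideoWriter could not open the GStreamer pipeline")

frame = np.zeros((height, width, 3), dtype=np.uint8)  # OpenCV expects BGR frames
while True:
    writer.write(frame)   # frames are pushed at the pace of this loop, no -r needed
    time.sleep(1 / fps)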

    References

    [1] Mediamtx (formerly rtsp-simple-server): https://github.com/bluenviron/mediamtx

    [2] FFmpeg: https://github.com/FFmpeg/FFmpeg

    [3] Compile OpenCV with GStreamer: https://github.com/bluenviron/mediamtx?tab=readme-ov-file#opencv

    Appendix

    Creating the source stream

    To instantiate the unprocessed stream, I use the following command, which streams the content of my webcam as an RTSP stream.

    ffmpeg -video_size 1280x720 -i /dev/video0  -avoid_negative_ts make_zero -vcodec libx264 -r 10 -f rtsp rtsp://localhost:8554/webcam


    Error log

ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.3.0 (conda-forge gcc 12.3.0-5)
  configuration: --prefix=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --disable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --enable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libopus --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/pkg-config
  libavutil      58. 29.100 / 58. 29.100
  libavcodec     60. 31.102 / 60. 31.102
  libavformat    60. 16.100 / 60. 16.100
  libavdevice    60.  3.100 / 60.  3.100
  libavfilter     9. 12.100 /  9. 12.100
  libswscale      7.  5.100 /  7.  5.100
  libswresample   4. 12.100 /  4. 12.100
  libpostproc    57.  3.100 / 57.  3.100
Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 184320 kb/s
  Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x5e2ef8b01340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x5e2ef8b01340] profile High 4:4:4 Predictive, level 2.2, 4:4:4, 8-bit
[libx264 @ 0x5e2ef8b01340] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/mystream':
  Metadata:
    encoder         : Lavf60.16.100
  Stream #0:0: Video: h264, yuv444p(tv, progressive), 640x480, q=2-31, 10 fps, 90k tbn
    Metadata:
      encoder         : Lavc60.31.102 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[vost#0:0/libx264 @ 0x5e2ef8b01080] Error submitting a packet to the muxer: Broken pipe
[out#0/rtsp @ 0x5e2ef8afd780] Error muxing a packet
[out#0/rtsp @ 0x5e2ef8afd780] video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
frame=    1 fps=0.1 q=-1.0 Lsize=N/A time=00:00:04.70 bitrate=N/A dup=0 drop=70 speed=0.389x
[libx264 @ 0x5e2ef8b01340] frame I:16    Avg QP: 6.00  size:   147
[libx264 @ 0x5e2ef8b01340] frame P:17    Avg QP: 9.94  size:   101
[libx264 @ 0x5e2ef8b01340] frame B:17    Avg QP: 9.94  size:    64
[libx264 @ 0x5e2ef8b01340] consecutive B-frames: 50.0%  0.0% 42.0%  8.0%
[libx264 @ 0x5e2ef8b01340] mb I  I16..4: 81.3% 18.7%  0.0%
[libx264 @ 0x5e2ef8b01340] mb P  I16..4: 52.9%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:47.1%
[libx264 @ 0x5e2ef8b01340] mb B  I16..4:  0.0%  5.9%  0.0%  B16..8:  0.1%  0.0%  0.0%  direct: 0.0%  skip:94.0%  L0:56.2% L1:43.8% BI: 0.0%
[libx264 @ 0x5e2ef8b01340] 8x8 transform intra:15.4% inter:100.0%
[libx264 @ 0x5e2ef8b01340] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x5e2ef8b01340] i16 v,h,dc,p: 97%  0%  3%  0%
[libx264 @ 0x5e2ef8b01340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  0%  0% 100%  0%  0%  0%  0%  0%  0%
[libx264 @ 0x5e2ef8b01340] Weighted P-Frames: Y:52.9% UV:52.9%
[libx264 @ 0x5e2ef8b01340] ref P L0: 88.9%  0.0%  0.0% 11.1%
[libx264 @ 0x5e2ef8b01340] kb/s:8.27
Conversion failed!
Traceback (most recent call last):
  File "/home/avishka/projects/read-process-stream/minimal-ffmpeg-error.py", line 58, in <module>
    process.stdin.write(image_bytes)
BrokenPipeError: [Errno 32] Broken pipe
