Other articles (61)

  • Visually enhancing it

    10 April 2011

    MediaSPIP is based on a system of themes and skeletons. Skeletons define where information is placed on the page, defining a specific use of the platform, while themes define the overall graphic design.
    Anyone can propose a new graphic theme or skeleton and make it available to the community.

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be likened to a rubrique (section).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

On other sites (12047)

  • FFMPEG - subtitles not showing for the full duration of video

26 June 2023, by Caio Maia

    I could not achieve the following:

    1. Have images loaded from a text file to create a slideshow.
    2. Have background music with volume control.
    3. Have my voice.mp3 over the background music.
    4. Have subtitles in the ASS format.
    5. Have a text shown with drawtext for the full duration of the video.
    6. All of this in only one command, if possible.

    The command I tried is:

    ffmpeg.exe -f concat -i images.txt -i bg_music.m4a -i voice.mp3 -filter_complex "[0:v]drawtext=fontfile='fonte.TTF':fontsize=20:fontcolor=white:text='Imagens da internet':x=w-tw-10:y=h-th-10,[0]overlay=10:10,ass=subtitles.txt[out],[1]volume=0.3[a1];[2]volume=2[a2];[a1][a2]amix=inputs=2:duration=shortest[aud]" -map "[out]" -map "[aud]":a -pix_fmt yuv420p -c:v libx264 -c:s mov_text -r 30 -y out.mp4

    It works, except that the subtitles only start showing after the first image of the slideshow appears.

    The content of images.txt is:

    file 'image1.png'
duration 20
file 'image2.png'
duration 5
file 'image3.png'
duration 5
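
    As an aside, the FFmpeg wiki's slideshow examples mention a concat demuxer quirk: the last file should be listed a second time, without a duration directive, otherwise the final duration entry may be ignored. A variant of the list above, under that assumption:

    file 'image1.png'
    duration 20
    file 'image2.png'
    duration 5
    file 'image3.png'
    duration 5
    file 'image3.png'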

    The content of subtitles.txt is:

    Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:01.00,0:00:06.00,Default,,0,0,0,,Subscribe!
Dialogue: 0,0:00:07.00,0:00:16.00,Default2,,0,0,0,,Like!
Dialogue: 0,0:00:17.00,0:00:26.00,Default,,0,0,0,,Share!
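
    For what it's worth, the ass filter (libass) expects a complete ASS script rather than bare Events lines: the file above has no [Script Info] or [V4+ Styles] sections, and the "Like!" line references a style Default2 that is never defined, so some lines may be dropped or fall back to default styling. A minimal header sketch (the font, colours, and play resolution here are assumptions):

    [Script Info]
    ScriptType: v4.00+
    PlayResX: 1280
    PlayResY: 720

    [V4+ Styles]
    Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
    Style: Default,Arial,20,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,2,0,2,10,10,10,1
    Style: Default2,Arial,20,&H0000FFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,2,0,2,10,10,10,1

    [Events]
    Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text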

    The problem is that only the "Share!" text is shown.
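
    Separately, the -filter_complex above mixes ',' and ';': filter chains must be separated by ';', and overlay needs two video inputs, so "[0]overlay=10:10" with a single input is not a valid chain. A cleaned-up sketch, assuming the subtitles are meant to be burned in and the ASS file is renamed subtitles.ass with a full header as above:

    ffmpeg -f concat -i images.txt -i bg_music.m4a -i voice.mp3 -filter_complex "[0:v]drawtext=fontfile='fonte.TTF':fontsize=20:fontcolor=white:text='Imagens da internet':x=w-tw-10:y=h-th-10,ass=subtitles.ass[out];[1:a]volume=0.3[a1];[2:a]volume=2[a2];[a1][a2]amix=inputs=2:duration=shortest[aud]" -map "[out]" -map "[aud]" -pix_fmt yuv420p -c:v libx264 -r 30 -y out.mp4

    Because the ass filter renders the subtitles into the video, no subtitle stream is mapped and -c:s mov_text is unnecessary.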
  • Capture full-range/lossless rgb frame from capture card that supports NV12 and YUYV output

13 January 2023, by kunal joshi

    I am trying to make a program that captures an image; I then need to compare the captured image against the input data I displayed, and the two should match pixel by pixel.

    Here are the details of my capture card:

    $ v4l2-ctl --list-formats-ext -d /dev/video0

    ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'NV12' (Y/CbCr 4:2:0)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
                Size: Discrete 2560x1440
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 640x480
                        Interval: Discrete 0.017s (60.000 fps)
        [1]: 'YUYV' (YUYV 4:2:2)
                Size: Discrete 2560x1440
                        Interval: Discrete 0.020s (50.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 640x480
                        Interval: Discrete 0.017s (60.000 fps)
        [2]: '' (30313050-0000-0010-8000-00aa003)
        [3]: '' (e436eb7e-524f-11ce-9f53-0020af0)

    $ v4l2-ctl --all

    Driver Info:
        Driver name      : uvcvideo
        Card type        : ITE HDMI 4K+ Bridge: ITE HDMI 4
        Bus info         : usb-0000:00:14.0-6
        Driver version   : 5.18.0
        Capabilities     : 0x84a00001
                Video Capture
                Metadata Capture
                Streaming
                Extended Pix Format
                Device Capabilities
        Device Caps      : 0x04200001
                Video Capture
                Streaming
                Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
        Width/Height      : 1920/1080
        Pixel Format      : 'YUYV' (YUYV 4:2:2)
        Field             : None
        Bytes per Line    : 3840
        Size Image        : 4147200
        Colorspace        : sRGB
        Transfer Function : Rec. 709
        YCbCr/HSV Encoding: Rec. 709
        Quantization      : Default (maps to Limited Range)
        Flags             :
Crop Capability Video Capture:
        Bounds      : Left 0, Top 0, Width 1920, Height 1080
        Default     : Left 0, Top 0, Width 1920, Height 1080
        Pixel Aspect: 1/1
Selection Video Capture: crop_default, Left 0, Top 0, Width 1920, Height 1080, Flags:
Selection Video Capture: crop_bounds, Left 0, Top 0, Width 1920, Height 1080, Flags:
Streaming Parameters Video Capture:
        Capabilities     : timeperframe
        Frames per second: 60.000 (60/1)
        Read buffers     : 0


    I have tried various approaches (OpenCV among them), but ffmpeg came the closest.

    With the command below I am able to get good results, but not what I want:

    ffmpeg -y -f v4l2 -pix_fmt NV12 -video_size 1920x1080 -i /dev/video0 -pix_fmt bgra -frames:v 10 webcam%03d.bmp

    [Three images in the original post: the reference image, the RGB values of the reference image, and the RGB values of the captured image.]

    Note: I am able to capture fine with AForge on Windows, but not with ffmpeg on Linux. I would like to know whether anyone has already found a solution to this.

    Thanks in advance.
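
    One detail stands out in the v4l2-ctl output above: the device reports "Quantization: Default (maps to Limited Range)", so the YUV samples are limited range (Y in 16-235), and a pixel-exact match against full-range RGB source data cannot be expected without an explicit range conversion. A hedged sketch using the scale filter's range options, untested against this particular card:

    ffmpeg -y -f v4l2 -pix_fmt NV12 -video_size 1920x1080 -i /dev/video0 -vf "scale=in_range=limited:out_range=full" -pix_fmt bgra -frames:v 10 webcam%03d.bmp

    Even with the range handled, NV12 (4:2:0) and YUYV (4:2:2) are chroma-subsampled, so a truly lossless pixel-by-pixel match with an RGB source may remain out of reach with this device's output formats.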

  • How can ffmpeg concat MP3s with full metadata incl. cover art?

    13 December 2022, by TEN

    Audio books are often inconveniently split into dozens of MP3s (with spaces in their names). These should be merged into one MP3 in a subdirectory (in which ffmpeg version 4.2.7-0ubuntu0.1 is invoked), without time-consuming and possibly degrading conversions, reliably preserving all metadata incl. cover art (present and similar in all MP3s of a title; their differences are significant only in lengths and track numbers).

    However, rather than picking these up from the first input MP3, the concat protocol (https://trac.ffmpeg.org/wiki/Concatenate#protocol) loses the cover art, and the concat demuxer (https://trac.ffmpeg.org/wiki/Concatenate#demuxer), documented as more flexible, even loses all metadata:

    ffmpeg -v verbose -f concat -safe 0 -i <(printf "file '$PWD/%s'\n" ../in\ track*.mp3) -c copy "out.mp3"
...
Input #0, concat, from '/dev/fd/63':
Duration: N/A, start: 0.000000, bitrate: 192 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
Stream #0:1: Video: png, 1 reference frame, rgba(pc), 300x300, 90k tbr, 90k tbn, 90k tbc
Metadata:
title           : 12ae3b8152eaf255ae0315c59400c540.png
comment         : Cover (front)
...
Output #0, mp3, to 'out.mp3':
Metadata:
TSSE            : Lavf58.29.100
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (copy)
...
[AVIOContext @ 0x561459f3dac0] Statistics: 1958050 bytes read, 0 seeks
[mp3 @ 0x561459f3f900] Skipping 0 bytes of junk at 110334.
[mp3 @ 0x561459f3f900] Estimating duration from bitrate, this may be inaccurate
No more output streams to write to, finishing.
size=   75793kB time=00:53:03.12 bitrate= 195.1kbits/s speed= 636x
video:0kB audio:75793kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000865%
Input file #0 (/dev/fd/63):
Input stream #0:0 (audio): 121847 packets read (77611658 bytes);
Input stream #0:1 (video): 40 packets read (4358440 bytes);
Total: 121887 packets (81970098 bytes) demuxed
Output file #0 (out.mp3):
Output stream #0:0 (audio): 121847 packets muxed (77611658 bytes);
Total: 121847 packets (77611658 bytes) muxed
[AVIOContext @ 0x561459ef6700] Statistics: 2 seeks, 298 writeouts
[AVIOContext @ 0x561459f39e40] Statistics: 2006324 bytes read, 0 seeks
[AVIOContext @ 0x561459ee0300] Statistics: 5040 bytes read, 0 seek

    The metadata, incl. the cover PNG detected above (as a single-frame "video"), should end up in the output MP3, but doesn't (even when adding -movflags use_metadata_tags, which is possibly intended for other formats).

    -metadata track="1/1" (or without the /1?) may be required, as the first input MP3 sometimes wrongly starts at a higher number.

    How do I make sure no metadata (incl. image) other than track numbers is lost when concatenating MP3s (by protocol or demuxer, from a set of input files with spaces in their names and a wildcard to match across track numbers)?
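
    A possible direction, not from the original post: since the concat demuxer starts from a fresh context with no global tags, the first source file can be passed a second time purely as a metadata and cover-art donor, combining -map_metadata with an explicit -map for the attached-picture stream. A sketch under that assumption, untested:

    first=$(printf '%s\n' ../in\ track*.mp3 | head -n 1)
    ffmpeg -f concat -safe 0 -i <(printf "file '$PWD/%s'\n" ../in\ track*.mp3) \
           -i "$first" \
           -map 0:a -map 1:v -map_metadata 1 -c copy \
           -id3v2_version 3 -metadata track="1/1" out.mp3

    Here input 0 supplies the concatenated audio, while input 1 supplies the global ID3 tags (-map_metadata 1) and the cover-art stream (-map 1:v); -id3v2_version 3 keeps the tags readable by older players.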