Advanced search

Media (1)

Keyword: - Tags -/biographie

Other articles (12)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a category-type document, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a media-type document, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code or other content (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, CSS), LaTeX, Google Earth) (...)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (4291)

  • Convert SDR-JPEG to HDR-AVIF [closed]

    27 April 2024, by Jonas Janzen

    I would like to convert a JPG file into an AVIF file that is tagged with HDR10 metadata (PQ transfer curve, BT.2020 color space, 10 bit).

    


    The idea is to save normal SDR images in HDR-capable containers so that they can be displayed in all their glory on HDR-capable displays.

    


    I want to play with inverse tone mapping to manipulate the output, so I implemented this in Python via subprocess.

    


    For now I just want the input image to be saved as HDR AVIF and to look the same as before, so that I can make changes in the next step.

    


    I used the following command for this:

    


    ffmpeg_command = [
        'ffmpeg',
        # Input file
        '-i', temp_file,
        # Used library
        '-c', 'libaom-av1',
        '-still-picture', '1',
        # Output metadata
        '-pix_fmt', 'yuv420p10le',
        '-strict', 'experimental',
        '-color_primaries', 'bt2020',
        '-color_trc', 'smpte2084',
        '-colorspace', 'bt2020nc',
        '-color_range', 'pc',
        # Output file
        output_file
    ]
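
    The question does not show how this list is executed; a minimal sketch, assuming it is simply handed to subprocess (subprocess.run and check=True are my additions, not part of the original post):

import subprocess

# Run the ffmpeg command built above; check=True raises CalledProcessError
# if ffmpeg exits with a non-zero status.
subprocess.run(ffmpeg_command, check=True)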

    


    So far my attempts have only been successful with the HLG characteristic: on my HDR monitor you can see that the images really are brighter in the highlights.

    


    With the PQ characteristic curve, the images are far too oversaturated.

    


    I guess this is because the HLG curve is compatible with the gamma curve, but PQ is not.

    


    Now my question is what I need to change.

    


    Which curve does FFmpeg expect as input?

    


    In Python I can change the images mathematically without any problems.
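
    For illustration, here is a minimal sketch of what "changing the images mathematically" could look like before the data is handed to ffmpeg: undo the sRGB gamma, convert the primaries to BT.2020, place SDR white at roughly 203 nits and re-encode with the PQ curve. NumPy/Pillow, the 203-nit white level and the 16-bit output are assumptions on my part, not something stated in the question:

import numpy as np
from PIL import Image

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

# BT.709 -> BT.2020 linear RGB conversion matrix (ITU-R BT.2087)
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def srgb_to_linear(v):
    # Inverse sRGB transfer function, v in [0, 1]
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def pq_encode(y):
    # SMPTE ST 2084 inverse EOTF; y is luminance normalised to 10000 nits
    yp = np.power(np.clip(y, 0.0, 1.0), M1)
    return np.power((C1 + C2 * yp) / (1.0 + C3 * yp), M2)

def sdr_jpeg_to_pq_rgb(path, sdr_white_nits=203.0):
    # Read the JPEG as floating-point sRGB values in [0, 1]
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    linear = srgb_to_linear(rgb)           # undo the sRGB gamma
    linear = linear @ M_709_TO_2020.T      # move to BT.2020 primaries
    nits = linear * sdr_white_nits         # place SDR white at ~203 nits
    pq = pq_encode(nits / 10000.0)         # re-encode with the PQ curve
    return np.round(pq * 65535.0).astype(np.uint16)  # 16-bit data for ffmpeg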

    


    The example images are tone-mapped back down to JPG to show what happened.

    



    


  • FFmpeg / libmp3lame crash while converting from .wav to .mp3 with vibrato

    1 July 2020, by Chitrang

    I have integrated the mobile-ffmpeg-full-gpl:4.3.1.LTS library in my Android app and am trying to convert a .wav file to .mp3 with the vibrato filter and the libmp3lame encoder.

    


    ffmpegCommand = "-i input.wav " +
                "-af vibrato=f=4 " +
                "-c:a libmp3lame " +
                "-b:a 96k " +
                "-ac 1 " +
                "-ar 44100 " +
                "-y output.mp3"


    


    FFmpeg logs:

    


    a.b.com I/mobile-ffmpeg: Loading mobile-ffmpeg.
a.b.com I/mobile-ffmpeg: Loaded mobile-ffmpeg-full-gpl-arm64-v8a-4.3.1-lts-20200125.
a.b.com D/mobile-ffmpeg: Callback thread started.
a.b.com I/mobile-ffmpeg: ffmpeg version git-2020-01-25-fd11dd500
a.b.com I/mobile-ffmpeg:  Copyright (c) 2000-2020 the FFmpeg developers
a.b.com I/mobile-ffmpeg:   built with Android (5220042 based on r346389c) clang version 8.0.7 (https://android.googlesource.com/toolchain/clang b55f2d4ebfd35bf643d27dbca1bb228957008617) (https://android.googlesource.com/toolchain/llvm 3c393fe7a7e13b0fba4ac75a01aa683d7a5b11cd) (based on LLVM 8.0.7svn)
a.b.com I/mobile-ffmpeg:   configuration: --cross-prefix=aarch64-linux-android- --sysroot=/files/android-sdk/ndk-bundle/toolchains/llvm/prebuilt/linux-x86_64/sysroot --prefix=/home/taner/Projects/mobile-ffmpeg/prebuilt/android-arm64/ffmpeg --pkg-config=/usr/bin/pkg-config --enable-version3 --arch=aarch64 --cpu=armv8-a --cc=aarch64-linux-android21-clang --cxx=aarch64-linux-android21-clang++ --target-os=android --enable-neon --enable-asm --enable-inline-asm --enable-cross-compile --enable-pic --enable-jni --enable-optimizations --enable-swscale --enable-shared --disable-v4l2-m2m --disable-outdev=v4l2 --disable-outdev=fbdev --disable-indev=v4l2 --disable-indev=fbdev --enable-small --disable-openssl --disable-xmm-clobber-test --disable-debug --enable-lto --disable-neon-clobber-test --disable-programs --disable-postproc --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-static --disable-sndio --disable-schannel --disable-securetransport --disable-xlib --disable-cuda --disable-cuvid --disa
a.b.com I/mobile-ffmpeg:   libavutil      56. 38.100 / 56. 38.100
a.b.com I/mobile-ffmpeg:   libavcodec     58. 65.102 / 58. 65.102
a.b.com I/mobile-ffmpeg:   libavformat    58. 35.101 / 58. 35.101
a.b.com I/mobile-ffmpeg:   libavdevice    58.  9.103 / 58.  9.103
a.b.com I/mobile-ffmpeg:   libavfilter     7. 70.101 /  7. 70.101
a.b.com I/mobile-ffmpeg:   libswscale      5.  6.100 /  5.  6.100
a.b.com I/mobile-ffmpeg:   libswresample   3.  6.100 /  3.  6.100
a.b.com W/mobile-ffmpeg: [wav @ 0x7294a86600] Estimating duration from bitrate, this may be inaccurate
a.b.com W/mobile-ffmpeg: Guessed Channel Layout for Input Stream #0.0 : mono
a.b.com I/mobile-ffmpeg: Input #0, wav, from '/data/user/0/a.b.com/cache/creation/input.wav':
a.b.com I/mobile-ffmpeg:   Duration: 
a.b.com I/mobile-ffmpeg: 00:00:07.15
a.b.com I/mobile-ffmpeg: , bitrate: 
a.b.com I/mobile-ffmpeg: 705 kb/s
a.b.com I/mobile-ffmpeg:     Stream #0:0
a.b.com I/mobile-ffmpeg: : Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, mono, s16, 705 kb/s
a.b.com I/mobile-ffmpeg: Stream mapping:
a.b.com I/mobile-ffmpeg:   Stream #0:0 -> #0:0
a.b.com I/mobile-ffmpeg:  (pcm_s16le (native) -> mp3 (libmp3lame))
a.b.com I/mobile-ffmpeg: Press [q] to stop, [?] for help
a.b.com I/mobile-ffmpeg: Output #0, mp3, to '/data/user/0/a.b.com/cache/creation/output.mp3':
a.b.com I/mobile-ffmpeg:   Metadata:
a.b.com I/mobile-ffmpeg:     TSSE            : 
a.b.com I/mobile-ffmpeg: Lavf58.35.101
a.b.com I/mobile-ffmpeg:     Stream #0:0
a.b.com I/mobile-ffmpeg: : Audio: mp3 (libmp3lame), 44100 Hz, mono, fltp, 96 kb/s
a.b.com I/mobile-ffmpeg:     Metadata:
a.b.com I/mobile-ffmpeg:       encoder         : 
a.b.com I/mobile-ffmpeg: Lavc58.65.102 libmp3lame
a.b.com I/mobile-ffmpeg: --------- beginning of crash
a.b.com A/libc: psymodel.c:576: void calc_energy(const PsyConst_CB2SB_t *, const FLOAT *, FLOAT *, FLOAT *, FLOAT *): assertion "el >= 0" failed
a.b.com A/libc: Fatal signal 6 (SIGABRT), code -6 (SI_TKILL) in tid 25800 (a.b.com), pid 25800 (a.b.com)


    


    Crash:

    


    --------- beginning of crash
 A/libc: psymodel.c:576: void calc_energy(const PsyConst_CB2SB_t *, const FLOAT *, FLOAT *, FLOAT *, FLOAT *): assertion "el >= 0" failed

? A/DEBUG: *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
? A/DEBUG: Build fingerprint: 'samsung/star2qltecs/star2qltecs:10/QP1A.190711.020/G965WVLS7DTE1:user/release-keys'
? A/DEBUG: Revision: '14'
? A/DEBUG: ABI: 'arm64'
? A/DEBUG: Timestamp: 2020-06-29 15:13:17-0400
? A/DEBUG: pid: 1849, tid: 1849, name: a.b.com  >>> a.b.com <<<
? A/DEBUG: uid: 12171
? A/DEBUG: signal 6 (SIGABRT), code -6 (SI_TKILL), fault addr --------
? A/DEBUG: Abort message: 'psymodel.c:576: void calc_energy(const PsyConst_CB2SB_t *, const FLOAT *, FLOAT *, FLOAT *, FLOAT *): assertion "el >= 0" failed'
? A/DEBUG:     x0  0000000000000000  x1  0000000000000739  x2  0000000000000006  x3  0000007fd65f7bb0
? A/DEBUG:     x4  0000000000000000  x5  0000000000000000  x6  0000000000000000  x7  0000000000000008
? A/DEBUG:     x8  00000000000000f0  x9  7f96d7a39856d151  x10 0000000000000001  x11 0000000000000000
? A/DEBUG:     x12 fffffff0fffffbdf  x13 000000005efa3d4c  x14 001c23c1a79207f5  x15 000079d970d48db2
? A/DEBUG:     x16 00000073e009e8c0  x17 00000073e007afe0  x18 00000073e492c000  x19 0000000000000739
? A/DEBUG:     x20 0000000000000739  x21 00000000ffffffff  x22 0000007fd65fc44c  x23 0000007fd65f8640
? A/DEBUG:     x24 0000007fd65fd120  x25 0000007fd65fd3a8  x26 0000007fd65f8240  x27 0000007fd65f9e40
? A/DEBUG:     x28 00000071f9f60900  x29 0000007fd65f7c50
? A/DEBUG:     sp  0000007fd65f7b90  lr  00000073e002c27c  pc  00000073e002c2a8
? A/DEBUG: backtrace:
? A/DEBUG:       #00 pc 00000000000832a8  /apex/com.android.runtime/lib64/bionic/libc.so (abort+160) (BuildId: 55ce0a7d78144b0290f9746ed1615719)
? A/DEBUG:       #01 pc 00000000000839e8  /apex/com.android.runtime/lib64/bionic/libc.so (__assert2+36) (BuildId: 55ce0a7d78144b0290f9746ed1615719)
? A/DEBUG:       #02 pc 0000000000969c60  /data/app/a.b.com-jXqE8oxytEkfSsn6pcdloQ==/lib/arm64/libavcodec.so


    


    I referred to link1 and link2 to understand the problem but could not find a solution.

    


  • How to make icecast relay the metadata of mp3 stream from ffmpeg ?

    8 January 2021, by Anonymous Mouse

    I'm trying to relay an internet radio stream with the command:

    


    ffmpeg -re -i 'http://internetradiourl' -vn -acodec copy -f tee -map 0:a "[f=mp3]icecast://source:pass@icecast:8000/a|[f=mp3]icecast://source:pass@icecast:8000/b"


    


    The output stream has all the metadata:

    


    Output #0, tee, to '[f=mp3]icecast://source:pass@icecast:8000/a|[f=mp3]icecast://source:pass@icecast:8000/b':
  Metadata:
    audiocodec      : MP3
    audiodevice     : IN  1+2 (Digigram LoLa280)
    audiodeviceid   : e08a462b-2944-4abf-894a-5d5e8649452d
    type            : event
    encoderMachineName: ERADIO
    hasAudio        : true
    hasCuePoints    : true
    hasKeyframes    : true
    hasVideo        : false
    source          : MediaProxy 4.4.13.6
    name            : ad
    ad_type         : endbreak
    cue_id          : b6c7ff39-2103-4442-bdc8-35f8ba2bb97d
    cue_time_duration: 44.365
    cue_time_start  : 1610035039552
    cue_title       : PSA - Danny - Parking (Jun 2020)
    encoder         : Lavf58.45.100
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 98304 kb/s
Stream mapping:
  Stream #0:1 -> #0:0 (copy)


    


    But if a client connects to the Icecast server, the metadata is lost:
ffprobe https://icecasturl/a

    


    Input #0, mp3, from 'https://icecasturl/a':
  Metadata:
    icy-pub         : 0
    icy-metadata    : 1
    StreamTitle     :
  Duration: N/A, start: 0.000000, bitrate: 96 kb/s
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 96 kb/s


    


    How can I configure Icecast so that it forwards all the metadata of the source stream?
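
    As far as I can tell, ffmpeg's icecast protocol only sends the static ice_* headers (ice_name, ice_description, ...) when the connection is opened; per-track StreamTitle updates are normally pushed separately through Icecast's admin interface. A minimal sketch of such an update for the /a mount from the command above, assuming Python with requests and placeholder admin credentials:

import requests

# Push a new StreamTitle to a running Icecast mount via the admin API.
# Host, mount and credentials below are placeholders; adjust to your setup.
def update_stream_title(title, mount="/a", host="http://icecast:8000",
                        user="admin", password="hackme"):
    resp = requests.get(
        f"{host}/admin/metadata",
        params={"mount": mount, "mode": "updinfo", "song": title},
        auth=(user, password),
        timeout=5,
    )
    resp.raise_for_status()

update_stream_title("PSA - Danny - Parking (Jun 2020)")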