
Media (2)

Keyword: - Tags -/media

Other articles (105)

  • Customize by adding your logo, banner, or background image

    5 September 2013

    Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP:

    Distribution    Version name            Version number
    Debian          Squeeze                 6.x.x
    Debian          Wheezy                  7.x.x
    Debian          Jessie                  8.x.x
    Ubuntu          The Precise Pangolin    12.04 LTS
    Ubuntu          The Trusty Tahr         14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (9443)

  • Parsing custom track data from ARCore mp4 recordings

    11 July 2022, by George Ellickson

    I'm using the Android ARCore Recording API to record custom data per frame and replay those values in tests, so that we can run instrumented tests of our ARCore functionality on devices and in CI. However, separately I'd also like to parse the generated mp4 recordings myself, outside of ARCore, and use my per-frame recorded data for analysis. In my ARCore app, I'd simply like to be able to add custom text data like the following, encoded as UTF-8 strings (or really any other simple encoding), for the given ARCore frame:

    


val data = "Hello world!"
val buffer = ByteBuffer.wrap(data.encodeToByteArray())
frame.recordTrackData(TRACK_UUID_MY_DATA, buffer)


    


    I can't find any docs or good examples of parsing the mp4 from ARCore, though, and I've had no luck in their arcore-android-sdk repo either. I've tried ffmpeg / ffprobe to figure out how the data is bundled into the MP4, but I'm stumped on which track to use and how best to deserialize it, as I'm unsure how the bytes are actually encoded under the hood.

    


    Using ffmpeg, I just get information like this about the tracks:

    


    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'HoverCapture/ar-recording-tests/src/androidTest/res/raw/ar_recording_2_photos.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2022-01-20T22:03:13.000000Z
  Duration: 00:00:17.12, start: 0.000000, bitrate: 26865 kb/s
  Stream #0:0[0x1](und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(tv, unknown/bt470bg/unknown, progressive), 1920x1080, 25615 kb/s, 28.08 fps, 29.58 tbr, 90k tbn (default)
    Metadata:
      creation_time   : 2022-01-20T22:03:13.000000Z
      handler_name    : ISO Media file produced by Google Inc. Created on: 01/20/2022.
      vendor_id       : [0][0][0][0]
  Stream #0:1[0x2](und): Data: none (mett / 0x7474656D), 18 kb/s (default)
    Metadata:
      creation_time   : 2022-01-20T22:03:13.000000Z
      handler_name    : ISO Media file produced by Google Inc. Created on: 01/20/2022.
  Stream #0:2[0x3](und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(tv, unknown/bt470bg/unknown, progressive), 640x480, 1929 kb/s, 28.12 fps, 29.42 tbr, 90k tbn (default)
    Metadata:
      creation_time   : 2022-01-20T22:03:13.000000Z
      handler_name    : ISO Media file produced by Google Inc. Created on: 01/20/2022.
      vendor_id       : [0][0][0][0]
  Stream #0:3[0x4](und): Data: none (mett / 0x7474656D), 18 kb/s (default)
    Metadata:
      creation_time   : 2022-01-20T22:03:13.000000Z
      handler_name    : ISO Media file produced by Google Inc. Created on: 01/20/2022.
  Stream #0:4[0x5](und): Data: none (mett / 0x7474656D), 54 kb/s (default)
    Metadata:
      creation_time   : 2022-01-20T22:03:13.000000Z
      handler_name    : ISO Media file produced by Google Inc. Created on: 01/20/2022.
  Stream #0:5[0x6](und): Data: none (mett / 0x7474656D), 54 kb/s (default)
    Metadata:
      creation_time   : 2022-01-20T22:03:13.000000Z
      handler_name    : ISO Media file produced by Google Inc. Created on: 01/20/2022.
  Stream #0:6[0x7](und): Data: none (mett / 0x7474656D), 0 kb/s (default)
    Metadata:
      creation_time   : 2022-01-20T22:03:13.000000Z
      handler_name    : ISO Media file produced by Google Inc. Created on: 01/20/2022.
  Stream #0:7[0x8](und): Data: none (mett / 0x7474656D), 6 kb/s (default)
    Metadata:
      creation_time   : 2022-01-20T22:03:13.000000Z
      handler_name    : ISO Media file produced by Google Inc. Created on: 01/20/2022.
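
    One way to get at those 'mett' data streams without ARCore is to let ffmpeg dump a data track to a raw file and then try to decode the bytes yourself. The sketch below is only a starting point under assumptions: which data track holds the custom data, and whether its samples really are plain UTF-8, are guesses based on the ffprobe output above rather than documented ARCore behaviour, and a raw dump loses the per-frame sample boundaries and timestamps (ARCore's own playback via Frame#getUpdatedTrackData, or Android's MediaExtractor, would preserve those).

import json
import subprocess

MP4 = "ar_recording_2_photos.mp4"  # recording path from the question

# List the streams so we can pick out the data ('mett') tracks.
probe = subprocess.run(
    ["ffprobe", "-v", "error", "-show_streams", "-of", "json", MP4],
    capture_output=True, text=True, check=True,
)
streams = json.loads(probe.stdout)["streams"]
data_streams = [s["index"] for s in streams if s.get("codec_type") == "data"]
print("data stream indexes:", data_streams)

# Dump one data stream as raw bytes (per-sample boundaries are lost here).
idx = data_streams[0]  # assumption: the first data track is the custom one
subprocess.run(
    ["ffmpeg", "-v", "error", "-y", "-i", MP4,
     "-map", f"0:{idx}", "-c", "copy", "-copy_unknown",
     "-f", "data", f"track_{idx}.bin"],
    check=True,
)

# If the track really is UTF-8 text, this prints something readable.
with open(f"track_{idx}.bin", "rb") as f:
    print(f.read(200).decode("utf-8", errors="replace"))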


    


  • FFMPEG missing audio when remuxing AVI to MP4

    24 July 2022, by Joba

    I want to convert from .avi to .mp4 without any loss of quality.

    


    When I use the ffmpeg CLI to remux the file, the output file is missing the audio.

    


    


    ffmpeg -i input.mpg.avi -c copy output.mp4

    


    


    ➜  Desktop ffmpeg -i input.mpg.avi -c copy output.mp4
ffmpeg version 5.0.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with Apple clang version 13.1.6 (clang-1316.0.21.2.5)
  configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/5.0.1_3 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-neon
  libavutil      57. 17.100 / 57. 17.100
  libavcodec     59. 18.100 / 59. 18.100
  libavformat    59. 16.100 / 59. 16.100
  libavdevice    59.  4.100 / 59.  4.100
  libavfilter     8. 24.100 /  8. 24.100
  libswscale      6.  4.100 /  6.  4.100
  libswresample   4.  3.100 /  4.  3.100
  libpostproc    56.  3.100 / 56.  3.100
Input #0, avi, from 'input.mpg.avi':
  Metadata:
    software        : transcode-1.0.6
  Duration: 01:45:46.56, start: 0.000000, bitrate: 2055 kb/s
  Stream #0:0: Video: mpeg4 (Simple Profile) (DX50 / 0x30355844), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 1915 kb/s, 25 fps, 25 tbr, 25 tbn
  Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 44100 Hz, stereo, fltp, 128 kb/s
Output #0, mp4, to 'output.mp4':
  Metadata:
    software        : transcode-1.0.6
    encoder         : Lavf59.16.100
  Stream #0:0: Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 640x360 [SAR 1:1 DAR 16:9], q=2-31, 1915 kb/s, 25 fps, 25 tbr, 12800 tbn
  Stream #0:1: Audio: mp3 (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame=158664 fps=142852 q=-1.0 Lsize= 1587337kB time=01:45:46.55 bitrate=2048.9kbits/s speed=5.71e+03x    
video:1483615kB audio:99165kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.287948%
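
    For what it's worth, the stats above show the MP3 stream was copied into the MP4 (audio:99165kB), so the audio track is present in output.mp4; a common explanation is that the player being used doesn't support MP3-in-MP4, which is legal but not universally handled. A sketch of a compromise, assuming the same file names as above: keep the video bit-for-bit and re-encode only the audio to AAC (slightly lossy for the audio, but broadly compatible).

import subprocess

# Copy the MPEG-4 video untouched; transcode only the MP3 audio to AAC,
# since MP3-in-MP4 is valid but not supported by every player.
subprocess.run(
    ["ffmpeg", "-i", "input.mpg.avi",
     "-c:v", "copy",
     "-c:a", "aac", "-b:a", "192k",
     "output.mp4"],
    check=True,
)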


    


  • FFMPEG HTTP Stream error, failed: Connection refused

    17 July 2022, by Halo Gass

    I tried to stream video frames from OpenCV over the network using MPEG-DASH, HLS, or RTSP via FFmpeg, but every attempt throws "Connection Error, Connection Refused", even when streaming to 127.0.0.1.

    


    Here is the code for testing:

    


    import subprocess
import cv2
rtmp_url = "rtmp://127.0.0.1:1935/stream/pupils_trace"

# webcam is index 0; you can also set a video file path instead, for example "/home/user/demo.mp4"
path = 0
cap = cv2.VideoCapture(path)

# gather video info to pass to ffmpeg
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# command and params for ffmpeg
command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           rtmp_url]

# using subprocess and pipe to fetch frame data
p = subprocess.Popen(command, stdin=subprocess.PIPE)


while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("frame read failed")
        break

    # YOUR CODE FOR PROCESSING FRAME HERE

    # write to pipe
    p.stdin.write(frame.tobytes())


    


    and below is the log:

    


ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
  configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, rawvideo, from 'pipe:':
  Duration: N/A, start: 0.000000, bitrate: 221184 kb/s
    Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1280x720, 221184 kb/s, 10 tbr, 10 tbn, 10 tbc
[tcp @ 0x556bea198680] Connection to tcp://127.0.0.1:1935 failed: Connection refused
[rtmp @ 0x556bea1a2640] Cannot open connection tcp://127.0.0.1:1935
rtmp://127.0.0.1:1935/stream/pupils_trace: Connection refused
Traceback (most recent call last):
  File "testing.py", line 42, in <module>
    p.stdin.write(frame.tobytes())


    Here is what I tried (all of it still ends in "Connection refused"):

    1. Running the code directly
    2. Running the code using sudo
    3. Running it with 'sudo su'
    4. Allowing the port in ufw
    5. Disabling ufw
    6. Executing the ffmpeg command directly in the terminal, which also throws "connection refused"
    7. Using 127.0.0.1, "localhost", 0.0.0.0, or my local IP; everything throws a connection error
    8. Trying both UDP and TCP

    Can anyone help me?
    Thanks

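
    A note on the error itself: "Connection refused" almost always means nothing is listening on port 1935 yet. With '-f flv rtmp://...', ffmpeg acts as an RTMP client and needs an RTMP server (for example nginx with the rtmp module, or MediaMTX) already running at that address; neither ffmpeg nor the Python script opens the port for you. A small sketch (host and port taken from the question) that you could run before launching ffmpeg to make that failure mode obvious:

import socket

def rtmp_server_reachable(host: str = "127.0.0.1", port: int = 1935, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not rtmp_server_reachable():
    raise SystemExit(
        "No RTMP server is listening on rtmp://127.0.0.1:1935 - "
        "start one (e.g. nginx-rtmp or MediaMTX) before running ffmpeg"
    )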