Other articles (60)

  • Participating in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. Simply subscribe to the translators' mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files into internet-compatible formats.
    Video files are encoded as MP4, Ogv and WebM for HTML5 playback, with MP4 also used for Flash playback.
    Audio files are encoded as MP3 and Ogg for HTML5 playback, with MP3 also used for Flash playback.
    Where possible, text is analyzed to extract the data needed by search engines, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
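
    As a rough illustration only (this is not MediaSPIP's actual pipeline; file names and quality settings are placeholders), producing the formats listed above with ffmpeg could look like this:

    ffmpeg -i upload.mov -c:v libvpx -b:v 1M -c:a libvorbis video.webm
    ffmpeg -i upload.mov -c:v libtheora -q:v 6 -c:a libvorbis video.ogv
    ffmpeg -i upload.mov -c:v libx264 -crf 23 -c:a aac video.mp4
    ffmpeg -i upload.wav -c:a libmp3lame -q:a 2 audio.mp3
    ffmpeg -i upload.wav -c:a libvorbis -q:a 5 audio.ogg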

  • The statuses of mutualisation instances

    13 March 2010

    For general compatibility between the mutualisation-management plugin and SPIP's original functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
    The possible statuses are: prepa (requested), which corresponds to an instance requested by a user. If the site was already created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)

Sur d’autres sites (11026)

  • Suppress black margins on the sides of an animation

    26 April 2018, by Clinton Winant

    I need to make an animation out of a collection of jpeg images. The image size, as given by display, is 1200x900. I can control the size of the jpg images with convert, but I am not sure what a good size would be. I use the following ffmpeg call:

    ffmpeg -f image2 -i img%04d.jpg -r 24  sound.avi

    In spite of a long string of warnings like

    Past duration 0.879997 too large

    sound.avi is produced; however, the animation has black margins on the left and right that I need to suppress (see the attached screenshot of the first frame, "totem - frame 1"). I am under the impression that the 4:3 format of the jpg images is standard? I view the animation with

    mplayer sound.avi

    The OS is Debian buster

    Further experiments suggest that the black margins disappear if the jpg files have an aspect ratio of 16:9. Is that the only aspect ratio possible?

    The output of

    ffmpeg -f image2 -i img%04d.jpg -vf cropdetect -vframes 5 -f null

    requested by @Gyan is

    ffmpeg version 3.4.2-2 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 7 (Debian 7.3.0-15)
     configuration: --prefix=/usr --extra-version=2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
     libavutil      55. 78.100 / 55. 78.100
     libavcodec     57.107.100 / 57.107.100
     libavformat    57. 83.100 / 57. 83.100
     libavdevice    57. 10.100 / 57. 10.100
     libavfilter     6.107.100 /  6.107.100
     libavresample   3.  7.  0 /  3.  7.  0
     libswscale      4.  8.100 /  4.  8.100
     libswresample   2.  9.100 /  2.  9.100
     libpostproc    54.  7.100 / 54.  7.100
    Trailing options were found on the commandline.
    Input #0, image2, from 'img%04d.jpg':
     Duration: 00:00:00.40, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1200x900 [SAR 150:150 DAR 4:3], 25 fps, 25 tbr, 25 tbn, 25 tbc
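
    Two notes on the log above, plus a sketch. The "Trailing options were found on the commandline." line means the cropdetect run never actually processed any frames, because the command is missing an output; appending "-f null -" (the final dash being the output) lets the filter run. More importantly, the stream is reported as 1200x900 with DAR 4:3, so the side bars are most likely pillarboxing added by the player on a 16:9 screen rather than black pixels encoded into sound.avi. If a 16:9 result is acceptable, a minimal sketch (the crop/scale values and the 24 fps rate are illustrative; -framerate before -i sets the rate at which the images are read, whereas -r only on the output resamples from the 25 fps default, which is a likely source of the "Past duration ... too large" warnings):

    ffmpeg -f image2 -i img%04d.jpg -vf cropdetect -vframes 5 -f null -

    ffmpeg -framerate 24 -f image2 -i img%04d.jpg -vf "crop=1200:674" sound.avi

    ffmpeg -framerate 24 -f image2 -i img%04d.jpg -vf "scale=1600:900,setsar=1" sound.avi

    The first command re-runs cropdetect with a null output so the filter actually executes, the second crops the 4:3 frames to roughly 16:9 (674 keeps the height even, at the cost of a few rows top and bottom), and the third stretches to a true 16:9 raster instead of cropping. Keeping the original 4:3 picture and letting the player pillarbox it is also a perfectly valid outcome.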
  • Black video when using ffplay on data received over UDP [closed]

    4 February 2023, by Alexandru-Marian Buza

    I'm encoding frames from an ID3D11Texture2D, and the received packet size is around 70-80.
    When I send the data over UDP and use ffplay to show the video, I get a black screen.
    The resolution and fps are correctly detected by ffplay.

    


    AcquiredBuffer.Attach(Buffer.MetaData.pSurface);

    ID3D11Texture2D* texture;
    ID3D11Texture2D* stagingTexture;
    AcquiredBuffer->QueryInterface(IID_PPV_ARGS(&texture));

    // Create a CPU-readable staging copy of the captured frame.
    texture->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BindFlags = 0;
    desc.MiscFlags = 0;
    hr = m_Device->Device->CreateTexture2D(&desc, NULL, &stagingTexture);
    if (stagingTexture != nullptr) {
        m_Device->DeviceContext->CopyResource(stagingTexture, texture);
        D3D11_MAPPED_SUBRESOURCE mappedData;
        m_Device->DeviceContext->Map(stagingTexture, 0, D3D11_MAP_READ, 0, &mappedData);

        // NOTE: this copies raw RGBA bytes straight into frame->data[0]. It ignores
        // mappedData.RowPitch vs. frame->linesize[0] and does no pixel-format
        // conversion, so if the encoder was opened with a YUV format this is the
        // most likely reason the decoded picture comes out black.
        memcpy(frame->data[0], mappedData.pData, mappedData.DepthPitch);

        auto ret = avcodec_send_frame(codec, frame);
        if (ret < 0) {
            writeText("Send frame failed");
        }
        else {
            // Drain every packet the encoder produces for this frame.
            while (ret >= 0) {
                ret = avcodec_receive_packet(codec, packet);
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                    break;
                }
                else if (ret < 0) {
                    writeText("Error during encoding");
                    break;
                }
                if (sendto(s, (char*)packet->data, packet->size, 0,
                           (sockaddr*)&dest, sizeof(dest)) == SOCKET_ERROR) {
                    writeText("Failed to send frame");
                }
                av_packet_unref(packet);
            }
        }

        m_Device->DeviceContext->Unmap(stagingTexture, 0);
        stagingTexture->Release();
    }
    texture->Release();
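
    A player-side check worth trying, assuming the encoder is H.264 and the packets are raw Annex B NAL units (the address and port are placeholders): a bare elementary stream over UDP is hard for ffplay to probe, so forcing the demuxer and reducing buffering sometimes produces a picture where the default invocation stays black:

    ffplay -f h264 -fflags nobuffer -flags low_delay udp://0.0.0.0:1234

    If the stream still decodes as black, the more likely culprit is the memcpy flagged in the comment above: the RGBA staging data needs to be copied row by row (RowPitch vs. linesize) and converted, for example with sws_scale, into whatever pixel format the encoder was opened with. Wrapping the packets in a container such as MPEG-TS via libavformat, instead of sending bare NAL units, also makes the stream far easier for players to handle.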


    


  • python3 openCV VideoCapture only black image (Raspbian Stretch)

    21 June 2018, by PrimuS

    I have a little Python function that analyzes a video, chunks it into one image per second and gives me the most dominant color for that image. (Code can be found here: https://github.com/primus852/python-movie-barcode)

    This works great on my Windows testing environment. However, on my Raspbian Stretch Raspberry Pi setup it only produces a black image, as the source seems to be black.

    I compiled OpenCV (3.4.1) myself following this great article: https://www.pyimagesearch.com/2017/09/04/raspbian-stretch-install-opencv-3-python-on-your-raspberry-pi/, and it worked perfectly fine. I am using python3 and a virtualenv.

    I tried adding the ffmpeg package (apt install ffmpeg), to no avail.

    Two ideas:

    • Did I compile the OpenCV source without support for mkv/mp4/similar? If so, how would I check that, and can I just "re-compile" the whole package?
    • Am I missing codecs? Where would I be able to check that?

    The crucial code (I think) is this:

    cap = cv2.VideoCapture(full_path)

    Are there other options that do not break the majority of my code? I read about skvideo.io from scikit-video, but that does not seem to work with my code...

    I am new to Python; any hint is appreciated.

    // EDIT: I think it is not a duplicate because I get no error saying that the capture cannot be opened, and:

    OpenCV FFMPEG support:

    • python -c "import cv2; print(cv2.getBuildInformation())" | grep -i ffmpeg
      • returns: FFMPEG: YES

    FFMPEG codec:

    • ffmpeg -codecs | grep -i avc (file is using AVC)
      • returns: DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_mmal h264_vdpau ) (encoders: libx264 libx264rgb h264_omx h264_vaapi )

    File PATH

    It exists and the path is correct...

    Any other ideas? Could it possibly be the virtualenv?

    // EDIT 2: I tried with an AVI file, and it works...
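
    Given that an AVI works while the MP4 stays black, two quick things to try from the shell (a sketch; input.mp4 and output.avi are placeholder names, and the second command is a workaround rather than a fix for the OpenCV build):

    ffprobe -hide_banner input.mp4

    ffmpeg -i input.mp4 -c:v mjpeg -q:v 3 -an output.avi

    The first command confirms what container and codec the failing file actually uses; if ffprobe reads it without complaint, the gap is on the OpenCV side (its FFmpeg backend or the codecs it was built against) rather than in the file itself. The second re-encodes the video into an MJPEG AVI, which the current OpenCV build evidently reads (-an drops the audio, which the colour analysis does not need).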