Advanced search

Media (1)

Keyword: - Tags -/bug

Other articles (33)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document is to be attached automatically; objet, the type of object to which (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to extract the data needed for detection by search engines, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (4065)

  • ffplay cannot play more than one song

    5 February 2020, by Bernie gach

    I took the ffplay.c file from http://ffmpeg.org/doxygen/trunk/ffplay_8c-source.html and reworked it into a .cpp file to embed in my Win32 GUI application. I made the following changes to it.

    1. I turned the int main function into a member function, as follows, so that I can pass in the HWND in which to embed the player:
    void Ffplay::play_song(string file, HWND parent, bool* successfull)
    {
       int flags;
       VideoState* is;
       input_filename = file;
       /* register all codecs, demux and protocols */
    #if CONFIG_AVDEVICE
       avdevice_register_all();
    #endif
       //avformat_network_init();
       //check whether the filename is valid
       if (input_filename.empty())
       {
           logger.log(logger.LEVEL_ERROR, "filename %s is not valid\n", file);
           return;
       }
       if (display_disable)
       {
           video_disable = 1;
       }
       flags = SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER;
       if (audio_disable)
           flags &= ~SDL_INIT_AUDIO;
       else
       {
           /* Try to work around an occasional ALSA buffer underflow issue when the
            * period size is NPOT due to ALSA resampling by forcing the buffer size. */
           if (!SDL_getenv("SDL_AUDIO_ALSA_SET_BUFFER_SIZE"))
               SDL_setenv("SDL_AUDIO_ALSA_SET_BUFFER_SIZE", "1", 1);
       }
       if (display_disable)
           flags &= ~SDL_INIT_VIDEO;
       SDL_SetMainReady();
       if (SDL_Init(flags))
       {
           logger.log(logger.LEVEL_ERROR, "Could not initialize SDL - %s\n", SDL_GetError());
           logger.log(logger.LEVEL_ERROR, "(Did you set the DISPLAY variable?)\n");
           return;
       }
       //Initialize optional fields of a packet with default values.
       //Note, this does not touch the data and size members, which have to be initialized separately.
       av_init_packet(&flush_pkt);
       flush_pkt.data = (uint8_t*)&flush_pkt;

       if (!display_disable)
       {
           int flags = SDL_WINDOW_HIDDEN;
           if (alwaysontop)
    #if SDL_VERSION_ATLEAST(2,0,5)
               flags |= SDL_WINDOW_ALWAYS_ON_TOP;
    #else
               logger.log(logger.LEVEL_INFO, "SDL version doesn't support SDL_WINDOW_ALWAYS_ON_TOP. Feature will be inactive.\n");
    #endif
           if (borderless)
               flags |= SDL_WINDOW_BORDERLESS;
           else
               flags |= SDL_WINDOW_RESIZABLE;
           SDL_InitSubSystem(flags);
           ShowWindow(parent, true);
           //window = SDL_CreateWindow(program_name, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, default_width, default_height, flags);
           window = SDL_CreateWindowFrom(parent);
           SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
           if (window) {
               renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
               if (!renderer)
               {
                   logger.log(logger.LEVEL_ERROR, "Failed to initialize a hardware accelerated renderer: %s\n", SDL_GetError());
                   renderer = SDL_CreateRenderer(window, -1, 0);
               }
               if (renderer)
               {
                   if (!SDL_GetRendererInfo(renderer, &renderer_info))
                   {
                       logger.log(logger.LEVEL_INFO, "Initialized %s renderer.\n", renderer_info.name);
                   }
               }
           }
           if (!window || !renderer || !renderer_info.num_texture_formats)
           {
               logger.log(logger.LEVEL_ERROR, "Failed to create window or renderer: %s\n", SDL_GetError());
               return;
           }
       }

       is = stream_open(input_filename.c_str(), file_iformat);
       if (!is)
       {
           logger.log(logger.LEVEL_ERROR, "Failed to initialize VideoState!\n");
           return;
       }
       //the song is playing now
       *successfull = true;
       event_loop(is);
       //the song has quit;
       *successfull = false;
    }
    2. I changed the callback functions, since SDL's C callbacks cannot call non-static C++ member functions directly. For example:
    void Ffplay::static_sdl_audio_callback(void* opaque, Uint8* stream, int len)
    {
       static_cast<Ffplay*>(opaque)->sdl_audio_callback(opaque, stream, len);
    }
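
    For context, a trampoline like this is presumably registered by handing the object to SDL through the userdata pointer when the audio device is opened. A minimal sketch, assuming the hypothetical helper and member names below (they are not taken from the post):

    void Ffplay::open_audio_sketch()   /* hypothetical helper, for illustration only */
    {
       SDL_AudioSpec wanted_spec{}, spec{};
       wanted_spec.freq     = 44100;                        // desired sample rate (placeholder)
       wanted_spec.format   = AUDIO_S16SYS;                 // 16-bit samples in native byte order
       wanted_spec.channels = 2;
       wanted_spec.samples  = 1024;                         // audio buffer size in sample frames
       wanted_spec.callback = Ffplay::static_sdl_audio_callback;
       wanted_spec.userdata = this;                         // recovered via static_cast inside the trampoline
       SDL_AudioDeviceID audio_dev = SDL_OpenAudioDevice(NULL, 0, &wanted_spec, &spec,
                                                         SDL_AUDIO_ALLOW_FREQUENCY_CHANGE);
    }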

    The closing code is unchanged from the original file; it shuts down the audio and the SDL framework:

    void Ffplay::do_exit(VideoState* is)
    {
       abort = true;
       if(is)
       {
           stream_close(is);
       }
       if (renderer)
           SDL_DestroyRenderer(renderer);
       if (window)
            SDL_DestroyWindow(window);
    #if CONFIG_AVFILTER
       av_freep(&vfilters_list);
    #endif
       avformat_network_deinit();
       SDL_Quit();

    }
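
    For comparison, do_exit in upstream ffplay.c (modulo the FFmpeg release) looks roughly like the sketch below; note that it ends by terminating the whole process, so none of ffplay's global/static state is ever reused after playback:

    static void do_exit(VideoState *is)
    {
       if (is)
           stream_close(is);
       if (renderer)
           SDL_DestroyRenderer(renderer);
       if (window)
           SDL_DestroyWindow(window);
       uninit_opts();
    #if CONFIG_AVFILTER
       av_freep(&vfilters_list);
    #endif
       avformat_network_deinit();
       if (show_status)
           printf("\n");
       SDL_Quit();
       exit(0);    /* the original never returns to a caller */
    }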

    I call the functions as follows from the main GUI:

    ft=std::async(launch::async, &Menu::play_song, this, songs_to_play.at(0));

    The Menu::play_song function is:

    void Menu::play_song(wstring song_path)
    {
       ready_to_play_song = false;
       OutputDebugString(L"\nbefore song\n");
       using std::future;
       using std::async;
       using std::launch;

       string input{ song_path.begin(),song_path.end() };
       Ffplay ffplay;
       ffplay.play_song(input, h_sdl_window, &song_opened);

       OutputDebugString(L"\nafter song\n");
       ready_to_play_song = true;
    }
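
    One property of this call pattern worth keeping in mind (an observation about std::async, not something stated in the post): a std::future obtained from std::async blocks in its destructor until the task has finished, so the previous future is typically waited on before the next song is launched, e.g.:

    if (ft.valid())
       ft.wait();                 // let the previous play_song call run to completion
    ft = std::async(launch::async, &Menu::play_song, this, songs_to_play.at(0));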

    THE PROBLEM is that I can only play one song. If I call the Menu::play_song function again, the sound is missing, and the video/cover art is occasionally missing as well. It seems that some resources are not being released, or something along those lines.

    I have localized the problem to this function:

    int Ffplay::packet_queue_get(PacketQueue* q, AVPacket* pkt, int block, int* serial)
    {

       MyAVPacketList* pkt1;
       int ret;
       int count=0;
       SDL_LockMutex(q->mutex);

       for (;;)
       {


           if (q->abort_request)
           {
               ret = -1;
               break;
           }

           pkt1 = q->first_pkt;
           if (pkt1) {
               q->first_pkt = pkt1->next;
               if (!q->first_pkt)
                   q->last_pkt = NULL;
               q->nb_packets--;
               q->size -= pkt1->pkt.size + sizeof(*pkt1);
               q->duration -= pkt1->pkt.duration;
               *pkt = pkt1->pkt;
               if (serial)
                   *serial = pkt1->serial;
               av_free(pkt1);
               ret = 1;
               break;
           }
           else if (!block) {
               ret = 0;
               break;
           }
           else
           {
               logger.log(logger.LEVEL_INFO, "packet_queue before");
               SDL_CondWait(q->cond, q->mutex);
               logger.log(logger.LEVEL_INFO, "packet_queue after");

           }
       }
       SDL_UnlockMutex(q->mutex);
       return ret;
    }

    The call to SDL_CondWait(q->cond, q->mutex) never returns.
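
    For reference, in stock ffplay.c that condition variable is only ever woken from two places (the exact code varies slightly between FFmpeg releases): packet_queue_put_private(), which calls SDL_CondSignal(q->cond) after appending a packet, and the abort path, roughly:

    static void packet_queue_abort(PacketQueue *q)
    {
       SDL_LockMutex(q->mutex);
       q->abort_request = 1;       /* makes packet_queue_get return -1 */
       SDL_CondSignal(q->cond);    /* wakes any thread blocked in SDL_CondWait */
       SDL_UnlockMutex(q->mutex);
    }

    So a wait that never returns means that, for this queue, no packets are being appended and no abort is being requested.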

  • How to make cloud services for camera device IoT monitoring and control

    16 November 2019, by guardian presence

    I am looking to use ffmpeg to pull a video RTSP stream over TCP, pass the video feeds to OpenCV for object recognition and tracking, and pass the output to Arduino serial control systems such as alarm systems, drones, and Arduino-controlled lawn mowers.
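
    A minimal C++ sketch of the first stage described here, pulling an RTSP stream over TCP into OpenCV (the URL, credentials and option handling are illustrative assumptions, not taken from the post):

    #include <cstdlib>
    #include <opencv2/opencv.hpp>

    int main()
    {
       // Ask OpenCV's FFmpeg backend to use TCP for RTSP ("key;value" syntax); POSIX setenv.
       setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;tcp", 1);

       // Hypothetical camera URL.
       cv::VideoCapture cap("rtsp://user:pass@192.168.1.10:554/stream1", cv::CAP_FFMPEG);
       if (!cap.isOpened())
           return 1;

       cv::Mat frame;
       while (cap.read(frame))
       {
           // Object detection/tracking would run here before driving the Arduino serial side.
           cv::imshow("feed", frame);
           if (cv::waitKey(1) == 27)   // Esc quits
               break;
       }
       return 0;
    }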

    Let's say someone has a CCTV system and a drone lying around. If they connect to my cloud, the system can pull the stream from the CCTV cameras and pull the stream from the drone. If an object moves into a camera zone, the cloud can direct the drone to move to that zone and take a closer look at a face; if a face is detected, it sends an alert and returns the drone to its dock. If it is a PTZ camera, OpenCV tracking zooms in on the face.

    If ROS can be turned into a cloud service, I need to know about hosting and about building a front end with user login and device login.

    I'm a CCTV installer who is new to programming, and I want to set up cloud login for camera IoT devices where users can control and monitor their devices from a central cloud.

  • FFmpeg access to an AVFoundation USB subdevice camera on OSX Mojave

    20 August 2020, by Retiarius

    I have a dual USB camera for VR: two cameras, one USB connection. On Linux it appears as /dev/video0 and /dev/video1, and I can capture using ffmpeg -i /dev/video0.

    On Mojave, I can see both devices in the USB hub:

    USB 2.0 Hub:

    Product ID: 0x0101
    Vendor ID:  0x1a40  (TERMINUS TECHNOLOGY INC.)
    Version:    1.11
    Speed:  Up to 480Mb/sec
    Location ID:    0x14200000 / 8
    Current Available (mA): 500
    Current Required (mA):  100
    Extra Operating Current (mA):   0

    Stereo Vision 2:

    Product ID: 0x9901
    Vendor ID:  0x0ac8  (Z-Star Microelectronics Corporation)
    Version:    27.02
    Serial Number:  SN0099
    Speed:  Up to 480Mb/sec
    Manufacturer:   SHENZHEN RERVISION TECHNOLOGY
    Location ID:    0x14220000 / 10
    Current Available (mA): 500
    Current Required (mA):  500
    Extra Operating Current (mA):   0

    Stereo Vision 2:

    Product ID: 0x9902
    Vendor ID:  0x0ac8  (Z-Star Microelectronics Corporation)
    Version:    27.02
    Serial Number:  SN0100
    Speed:  Up to 480Mb/sec
    Manufacturer:   SHENZHEN RERVISION TECHNOLOGY
    Location ID:    0x14210000 / 9
    Current Available (mA): 500
    Current Required (mA):  500
    Extra Operating Current (mA):   0

    But when I list the devices, I can see only one [0]:

    ffmpeg -f avfoundation -list_devices true -i ""
    [AVFoundation input device @ 0x7fae5b501a80] AVFoundation video devices:
    [AVFoundation input device @ 0x7fae5b501a80] [0] Stereo Vision 2
    [AVFoundation input device @ 0x7fae5b501a80] [1] FaceTime HD Camera
    [AVFoundation input device @ 0x7fae5b501a80] [2] Capture screen 0

    Capturing from this device captures from only one of the cameras.

    How can I get ffmpeg to detect the second USB device as well?
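
    For reference, capturing from the one device that does show up would presumably look something like the line below (the index and options are illustrative, not taken from the post):

    ffmpeg -f avfoundation -framerate 30 -pixel_format uyvy422 -i "0" out.mkv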