Advanced search

Media (29)

Keyword: - Tags -/Music

Other articles (73)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in standalone version.
    To get a working installation, you need to install all of the software dependencies manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    Explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for a comparison.
    To use it, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (11095)

  • convert WAV to TETRA format

    14 June 2020, by Ashish Arora

    I am trying to convert a WAV file into a TETRA-encoded file (https://en.wikipedia.org/wiki/Terrestrial_Trunked_Radio). TETRA is used by firefighters and provides radio-like voice.

    I am trying to use the official TETRA codec code available at https://www.etsi.org/deliver/etsi_en/300300_300399/30039502/01.03.01_60/ and it can easily be compiled using the scripts available at https://github.com/sq5bpf/install-tetra-codec.

    However, I am not able to figure out how to convert a WAV file into TETRA codec files using these tools. I tried going through the documentation of the compiled binaries (ccoder, cdecoder, scoder, sdecoder).

    I tried the following command:

    tetra/bin/scoder input.wav serial_file synth_file

    Here serial_file and synth_file are the output files; the scoder.c file documents them as follows:

    INPUT       :   - Description : speech file to be analyzed
                    - Format : binary file 16 bit-samples
                      240 samples per frame

    serial_file :   - Description : serial stream output file
                    - Format : binary file 16 bit-samples
                      each 16 bit-sample represents one encoded bit
                      138 (= 1 + 137) bits per frame

    synth_file  :   - Description : local synthesis output file
                    - Format : binary file 16 bit-samples

    For an input file of size 13M, I obtained a serial_file of 8.0M and a synth_file of 16M. However, I thought that since the WAV file is being converted into a walkie-talkie type signal, the output file sizes would be a lot smaller.
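
    For what it's worth, those sizes are roughly what the quoted format implies. A minimal sketch of the arithmetic, assuming the input is fed to scoder as headerless 16-bit samples (the documented "binary file 16 bit-samples" format, e.g. a WAV converted to raw s16le PCM first) rather than as a .wav container, so the byte counts below are illustrative only:

        // Hypothetical size estimate derived only from the scoder.c documentation quoted above.
        #include <cstdint>
        #include <iostream>

        int main() {
            const std::uint64_t input_bytes     = 13ull * 1024 * 1024; // ~13M of raw 16-bit samples
            const std::uint64_t bytes_per_frame = 240 * 2;             // 240 samples per frame, 2 bytes each
            const std::uint64_t frames          = input_bytes / bytes_per_frame;

            // serial_file: 138 bits per frame, but each bit is stored as a 16-bit word
            const std::uint64_t serial_bytes = frames * 138 * 2;
            // synth_file: locally decoded speech, again 240 16-bit samples per frame
            const std::uint64_t synth_bytes  = frames * 240 * 2;
            // what the payload would be if the 138 bits per frame were packed as bits
            const std::uint64_t packed_bytes = frames * 138 / 8;

            std::cout << "frames        ~ " << frames       << "\n"
                      << "serial_file   ~ " << serial_bytes << " bytes (~0.58x the input)\n"
                      << "synth_file    ~ " << synth_bytes  << " bytes (~1.0x the input)\n"
                      << "packed stream ~ " << packed_bytes << " bytes (~0.036x the input)\n";
        }

    So the low bitrate is there; it is just that serial_file stores one encoded bit per 16-bit word rather than a packed bitstream, and synth_file is not compressed at all, it is the local synthesis (decoded speech).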

    I want to clarify:

    • whether I used the correct code to convert an input WAV file into a TETRA-format output file;
    • what exactly serial_file and synth_file are.
    Thanks,
    Ashish

  • ffplay cannot play more than one song

    5 February 2020, by Bernie gach

    I have taken the ffplay.c file from http://ffmpeg.org/doxygen/trunk/ffplay_8c-source.html and reworked it into a .cpp file to embed in my Win32 GUI application. I have made the following changes to it.

    1. Made the int main function into a member function as follows, so I can pass the HWND used to embed the player:
    void Ffplay::play_song(string file, HWND parent, bool* successfull)
    {
       int flags;
       VideoState* is;
       input_filename = file;
       /* register all codecs, demux and protocols */
    #if CONFIG_AVDEVICE
       avdevice_register_all();
    #endif
       //avformat_network_init();
       //check whether the filename is valid
       if (input_filename.empty())
       {
           logger.log(logger.LEVEL_ERROR, "filename %s is not valid\n", file);
           return;
       }
       if (display_disable)
       {
           video_disable = 1;
       }
       flags = SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER;
       if (audio_disable)
           flags &= ~SDL_INIT_AUDIO;
       else
       {
           /* Try to work around an occasional ALSA buffer underflow issue when the
            * period size is NPOT due to ALSA resampling by forcing the buffer size. */
           if (!SDL_getenv("SDL_AUDIO_ALSA_SET_BUFFER_SIZE"))
               SDL_setenv("SDL_AUDIO_ALSA_SET_BUFFER_SIZE", "1", 1);
       }
       if (display_disable)
           flags &= ~SDL_INIT_VIDEO;
       SDL_SetMainReady();
       if (SDL_Init(flags))
       {
           logger.log(logger.LEVEL_ERROR, "Could not initialize SDL - %s\n", SDL_GetError());
           logger.log(logger.LEVEL_ERROR, "(Did you set the DISPLAY variable?)\n");
           return;
       }
       //Initialize optional fields of a packet with default values.
       //Note, this does not touch the data and size members, which have to be initialized separately.
       av_init_packet(&flush_pkt);
       flush_pkt.data = (uint8_t*)&flush_pkt;

       if (!display_disable)
       {
           int flags = SDL_WINDOW_HIDDEN;
           if (alwaysontop)
    #if SDL_VERSION_ATLEAST(2,0,5)
               flags |= SDL_WINDOW_ALWAYS_ON_TOP;
    #else
               logger.log(logger.LEVEL_INFO, "SDL version doesn't support SDL_WINDOW_ALWAYS_ON_TOP. Feature will be inactive.\n");
    #endif
           if (borderless)
               flags |= SDL_WINDOW_BORDERLESS;
           else
               flags |= SDL_WINDOW_RESIZABLE;
           SDL_InitSubSystem(flags);
           ShowWindow(parent, true);
           //window = SDL_CreateWindow(program_name, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, default_width, default_height, flags);
           window = SDL_CreateWindowFrom(parent);
           SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
           if (window) {
               renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
               if (!renderer)
               {
                   logger.log(logger.LEVEL_ERROR, "Failed to initialize a hardware accelerated renderer: %s\n", SDL_GetError());
                   renderer = SDL_CreateRenderer(window, -1, 0);
               }
               if (renderer)
               {
                   if (!SDL_GetRendererInfo(renderer, &renderer_info))
                   {
                       logger.log(logger.LEVEL_INFO, "Initialized %s renderer.\n", renderer_info.name);
                   }
               }
           }
           if (!window || !renderer || !renderer_info.num_texture_formats)
           {
               logger.log(logger.LEVEL_ERROR, "Failed to create window or renderer: %s\n", SDL_GetError());
               return;
           }
       }

       is = stream_open(input_filename.c_str(), file_iformat);
       if (!is)
       {
           logger.log(logger.LEVEL_ERROR, "Failed to initialize VideoState!\n");
           return;
       }
       //the song is playing now
       *successfull = true;
       event_loop(is);
       //the song has quit;
       *successfull = false;
    }
    2. Changed the callback functions, since non-static member functions cannot be passed directly as C callbacks, e.g.:
    void Ffplay::static_sdl_audio_callback(void* opaque, Uint8* stream, int len)
    {
        static_cast<Ffplay*>(opaque)->sdl_audio_callback(opaque, stream, len);
    }
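
    For this wrapper to work, the opaque pointer that SDL hands back must be the Ffplay instance itself. A minimal sketch of the registration, assuming ffplay.c's audio_open() has also been turned into a member function (SDL_AudioSpec and its fields are real SDL API, the variable names come from ffplay's audio_open(), everything else here is illustrative):

        SDL_AudioSpec wanted_spec = {};
        wanted_spec.freq     = wanted_sample_rate;                     // as computed in audio_open()
        wanted_spec.format   = AUDIO_S16SYS;
        wanted_spec.channels = wanted_nb_channels;
        wanted_spec.callback = Ffplay::static_sdl_audio_callback;      // the static wrapper above
        wanted_spec.userdata = this;                                   // recovered by the static_cast
        // SDL_OpenAudioDevice(NULL, 0, &wanted_spec, &spec, ...) then follows as in stock ffplay.c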

    The shutdown code does not change much from the original file; it closes the audio and the SDL framework:

    void Ffplay::do_exit(VideoState* is)
    {
       abort = true;
       if(is)
       {
           stream_close(is);
       }
       if (renderer)
           SDL_DestroyRenderer(renderer);
       if (window)
            SDL_DestroyWindow(window);
    #if CONFIG_AVFILTER
       av_freep(&vfilters_list);
    #endif
       avformat_network_deinit();
       SDL_Quit();

    }

    I call the functions as follows from the main GUI:

    ft=std::async(launch::async, &Menu::play_song, this, songs_to_play.at(0));

    The Menu::play_song function is:

    void Menu::play_song(wstring song_path)
    {
       ready_to_play_song = false;
       OutputDebugString(L"\nbefore song\n");
       using std::future;
       using std::async;
       using std::launch;

       string input{ song_path.begin(),song_path.end() };
       Ffplay ffplay;
       ffplay.play_song(input, h_sdl_window, &song_opened);

       OutputDebugString(L"\nafter song\n");
       ready_to_play_song = true;
    }

    THE PROBLEM is that I can only play one song. If I call the Menu::play_song function again, the sound is missing and the video/cover art is occasionally missing as well. It seems some resources are not being released, or something like that.

    I have localised the problem to this function:

    int Ffplay::packet_queue_get(PacketQueue* q, AVPacket* pkt, int block, int* serial)
    {

       MyAVPacketList* pkt1;
       int ret;
       int count=0;
       SDL_LockMutex(q->mutex);

       for (;;)
       {


           if (q->abort_request)
           {
               ret = -1;
               break;
           }

           pkt1 = q->first_pkt;
           if (pkt1) {
               q->first_pkt = pkt1->next;
               if (!q->first_pkt)
                   q->last_pkt = NULL;
               q->nb_packets--;
               q->size -= pkt1->pkt.size + sizeof(*pkt1);
               q->duration -= pkt1->pkt.duration;
               *pkt = pkt1->pkt;
               if (serial)
                   *serial = pkt1->serial;
               av_free(pkt1);
               ret = 1;
               break;
           }
           else if (!block) {
               ret = 0;
               break;
           }
           else
           {
               logger.log(logger.LEVEL_INFO, "packet_queue before");
               SDL_CondWait(q->cond, q->mutex);
               logger.log(logger.LEVEL_INFO, "packet_queue after");

           }
       }
       SDL_UnlockMutex(q->mutex);
       return ret;
    }

    The call to SDL_CondWait(q->cond, q->mutex) never returns.
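
    For context, in ffplay.c the only places that signal q->cond and can wake this wait are when a packet is queued (packet_queue_put_private ends with SDL_CondSignal(q->cond)) and when the queue is aborted or restarted. Paraphrased from the ffplay.c revision that still uses flush_pkt, so check against your copy:

        // Wakes packet_queue_get() so it can return -1 during shutdown.
        static void packet_queue_abort(PacketQueue *q)
        {
            SDL_LockMutex(q->mutex);
            q->abort_request = 1;
            SDL_CondSignal(q->cond);
            SDL_UnlockMutex(q->mutex);
        }

        // Re-arms the queue for reuse: clears the abort flag and queues flush_pkt,
        // which also signals q->cond.
        static void packet_queue_start(PacketQueue *q)
        {
            SDL_LockMutex(q->mutex);
            q->abort_request = 0;
            packet_queue_put_private(q, &flush_pkt);
            SDL_UnlockMutex(q->mutex);
        }

    So if, on the second run, the read thread never feeds the queue (for example because a flag such as the abort member set in do_exit is never reset, or stream_open fails partway), nothing ever signals q->cond and the consumer blocks in SDL_CondWait forever, which would match the symptom described.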

  • how to make cloud services for camera device IoT monitoring and control

    16 November 2019, by guardian presence

    Looking to use ffmpeg to pull RTSP video streams over TCP, pass the video feeds to OpenCV for object recognition and tracking, and pass the output to Arduino serial control systems such as alarm systems, drones, and Arduino-controlled lawn mowers.
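
    The first link in that chain is the easiest to pin down. A minimal sketch of pulling an RTSP stream over TCP into OpenCV, assuming an OpenCV build with the FFmpeg backend; the camera URL is a placeholder, and OPENCV_FFMPEG_CAPTURE_OPTIONS is the environment variable OpenCV documents for passing FFmpeg options such as rtsp_transport to that backend:

        // Hypothetical sketch: RTSP over TCP -> OpenCV frames, ready for detection/tracking.
        #include <opencv2/opencv.hpp>
        #include <cstdlib>

        int main() {
            // Ask OpenCV's FFmpeg backend to use TCP transport for RTSP.
            setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;tcp", 1);

            cv::VideoCapture cap("rtsp://user:pass@camera-ip:554/stream1", cv::CAP_FFMPEG);
            if (!cap.isOpened()) return 1;

            cv::Mat frame;
            while (cap.read(frame)) {
                // Object detection / tracking would run here, and its result would be
                // forwarded to the Arduino over a serial port.
                cv::imshow("camera", frame);
                if (cv::waitKey(1) == 27) break;   // Esc quits
            }
            return 0;
        }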

    Let's say someone has a CCTV system and a drone lying around. If they connect to my cloud, the system can pull the stream from the CCTV cameras and from the drone; if an object moves into a camera zone, the cloud can control the drone to move to that zone and take a closer look at a face; if a face is detected, send an alert and return the drone to its dock. If it is a PTZ camera, OpenCV handles the tracking and zooming to the face.

    If ROS can be turned into a cloud service, I need to know about hosting and
    building a front end with user login and device login.

    I'm a CCTV installer who is new to programming and wants to set up cloud login for camera IoT devices, where users can control and monitor devices from a central cloud.