Advanced search

Media (3)

Other articles (60)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP means:
    Extensible Metadata Platform (XMP) is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
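    To make the description concrete, here is a stdlib-only Python sketch that parses a minimal XMP packet and extracts the title and author. The namespace URIs are the standard RDF and Dublin Core ones; the packet contents (`Example`, `Jane Doe`) are purely illustrative:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical XMP packet: title and creator stored as Dublin Core
xmp = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title><rdf:Alt><rdf:li xml:lang="x-default">Example</rdf:li></rdf:Alt></dc:title>
      <dc:creator><rdf:Seq><rdf:li>Jane Doe</rdf:li></rdf:Seq></dc:creator>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>"""

NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
}

root = ET.fromstring(xmp)
title = root.find(".//dc:title/rdf:Alt/rdf:li", NS).text
creator = root.find(".//dc:creator/rdf:Seq/rdf:li", NS).text
print(title, creator)  # Example Jane Doe
```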

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (6013)

  • Python aiortc: How to record audio and video coming from the client in the same file? [closed]

    22 December 2024, by Chris P

    I have an aiortc app in which an HTML5 client sends microphone and camera data to the server.

    On the server side I successfully played these two streams separately.

    


    But when I try to record with aiortc's MediaRecorder helper class, only the audio is recorded and the video is dropped (MP4 format).

    I think this is due to a sync issue.

    


    The audio_frame and the video_frame (each pair of them) have different time_base values (I don't know whether that is unusual). Their times differ as well.
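    Since each pair carries its own time_base, one way to compare or align the timestamps is to rescale one clock into the other. A minimal stdlib sketch, assuming the 48 kHz audio and 90 kHz video clocks typical of WebRTC (the `rescale` helper is illustrative, not part of aiortc):

```python
from fractions import Fraction

# Typical WebRTC clock rates (assumptions; aiortc frames carry their own time_base)
AUDIO_TB = Fraction(1, 48000)  # 48 kHz audio clock
VIDEO_TB = Fraction(1, 90000)  # 90 kHz RTP video clock

def rescale(pts, src_tb, dst_tb):
    """Express a pts given in src_tb ticks in dst_tb ticks."""
    return int(pts * src_tb / dst_tb)

# 0.5 s into the stream: 24000 audio ticks equal 45000 video ticks
audio_pts = 24000
video_pts = rescale(audio_pts, AUDIO_TB, VIDEO_TB)
print(video_pts)  # 45000
```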

    


    I can share code, but it wouldn't help much right now.

    


    Edit: I also tried to synchronize the tracks on the client side, with no luck.

    


// Synchronize tracks
async function synchronizeTracks(localStream) {
    const videoTrack = localStream.getVideoTracks()[0];
    const audioTrack = localStream.getAudioTracks()[0];

    // Processors expose raw frames; generators rebuild tracks from them
    const videoProcessor = new MediaStreamTrackProcessor({ track: videoTrack });
    const audioProcessor = new MediaStreamTrackProcessor({ track: audioTrack });
    const videoGenerator = new MediaStreamTrackGenerator({ kind: "video" });
    const audioGenerator = new MediaStreamTrackGenerator({ kind: "audio" });

    // The synchronized stream must carry the generator tracks,
    // not the original unprocessed ones
    const syncedStream = new MediaStream();
    syncedStream.addTrack(videoGenerator);
    syncedStream.addTrack(audioGenerator);

    const videoReader = videoProcessor.readable.getReader();
    const audioReader = audioProcessor.readable.getReader();
    const videoWriter = videoGenerator.writable.getWriter();
    const audioWriter = audioGenerator.writable.getWriter();

    // Frame timestamps are in microseconds
    const syncThreshold = 5000; // maximum allowable drift: 5 ms
    let baseTimestamp = null;

    // Frames held back when one track runs ahead of the other
    let pendingVideo = null;
    let pendingAudio = null;

    async function processTracks() {
        try {
            while (true) {
                // Reuse a held-back frame if there is one, else read a new one
                const videoResult = pendingVideo
                    ? { done: false, value: pendingVideo }
                    : await videoReader.read();
                const audioResult = pendingAudio
                    ? { done: false, value: pendingAudio }
                    : await audioReader.read();
                pendingVideo = null;
                pendingAudio = null;

                if (videoResult.done || audioResult.done) break;

                const videoFrame = videoResult.value;
                const audioFrame = audioResult.value;

                // Initialize base timestamp if needed
                if (baseTimestamp === null) {
                    baseTimestamp = Math.min(videoFrame.timestamp, audioFrame.timestamp);
                }

                const timeDifference =
                    (videoFrame.timestamp - baseTimestamp) -
                    (audioFrame.timestamp - baseTimestamp);

                // The generator takes ownership of written frames,
                // so they are not closed here
                if (Math.abs(timeDifference) <= syncThreshold) {
                    // Frames are in sync: emit both
                    await videoWriter.write(videoFrame);
                    await audioWriter.write(audioFrame);
                } else if (timeDifference > 0) {
                    // Video is ahead: emit audio, hold the video frame back
                    await audioWriter.write(audioFrame);
                    pendingVideo = videoFrame;
                } else {
                    // Audio is ahead: emit video, hold the audio frame back
                    await videoWriter.write(videoFrame);
                    pendingAudio = audioFrame;
                }
            }
        } catch (error) {
            console.error("Error in track synchronization:", error);
        } finally {
            videoReader.releaseLock();
            audioReader.releaseLock();
            videoWriter.close();
            audioWriter.close();
        }
    }

    processTracks();

    return syncedStream;
}



    


    Python code to improve:

    


import asyncio
import fractions

class SyncClientTracksForRecording:
    def __init__(self, audio_track, video_track, audio_track_sync_q, video_track_sync_q):
        self.audio_track = audio_track
        self.video_track = video_track
        self.audio_track_sync_q = audio_track_sync_q
        self.video_track_sync_q = video_track_sync_q

        # Time bases
        self.audio_time_base = fractions.Fraction(1, 48000)  # 48 kHz audio
        self.video_time_base = fractions.Fraction(1, 90000)  # 90 kHz video

        # Elapsed time tracking
        self.audio_elapsed_time = 0.0
        self.video_elapsed_time = 0.0

        # Stop signal for synchronization loop
        self.stop_signal = False

    async def sync(self):
        while not self.stop_signal:
            try:
                # Receive audio and video frames concurrently
                audio_task = asyncio.create_task(self.audio_track.recv())
                video_task = asyncio.create_task(self.video_track.recv())
                audio_frame, video_frame = await asyncio.gather(audio_task, video_task)

                # Set time bases
                audio_frame.time_base = self.audio_time_base
                video_frame.time_base = self.video_time_base

                # Calculate and assign PTS values
                audio_frame.pts = int(self.audio_elapsed_time / float(self.audio_time_base))
                video_frame.pts = int(self.video_elapsed_time / float(self.video_time_base))

                # Increment elapsed time
                self.audio_elapsed_time += 0.020  # assuming 20 ms audio frames
                self.video_elapsed_time += 1 / 30  # assuming 30 fps video

                # Enqueue frames
                await asyncio.gather(
                    self.audio_track_sync_q.put(audio_frame),
                    self.video_track_sync_q.put(video_frame),
                )

            except Exception as e:
                print(f"Error in sync loop: {e}")
                break

    def stop(self):
        """Stop the synchronization loop."""
        self.stop_signal = True
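    Once sync() is filling the two queues, the re-timestamped frames still have to reach the recorder. A common aiortc pattern is to wrap each queue in a track whose recv() pops from it and pass those tracks to MediaRecorder.addTrack(). The sketch below is a stdlib-only stand-in showing the shape of that relay; `QueueRelayTrack` and `demo` are hypothetical names, and in a real app the class would subclass aiortc.mediastreams.MediaStreamTrack:

```python
import asyncio

class QueueRelayTrack:
    """Stdlib-only stand-in for a queue-backed media track.

    In a real aiortc app this would subclass
    aiortc.mediastreams.MediaStreamTrack (with kind "audio" or "video")
    and be handed to MediaRecorder.addTrack(), so the recorder pulls the
    re-timestamped frames that sync() enqueued.
    """

    def __init__(self, queue, kind):
        self.queue = queue
        self.kind = kind

    async def recv(self):
        # The recorder calls recv() repeatedly; we just relay the queue
        return await self.queue.get()

async def demo():
    q = asyncio.Queue()
    track = QueueRelayTrack(q, "audio")
    for pts in (0, 960, 1920):  # 20 ms audio frames at 48 kHz
        await q.put({"pts": pts})
    return [(await track.recv())["pts"] for _ in range(3)]

result = asyncio.run(demo())
print(result)  # [0, 960, 1920]
```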



    


  • Revision 8d363882fd: Choosing GOOD mode by default

    23 August 2014, by Dmitry Kovalev

    Changed Paths:
     Modify /vp9/vp9_cx_iface.c



    Choosing GOOD mode by default.

    This patch fixes slow first pass problem. Mode could only be determined
    from the deadline value during frame encode call. Unfortunately, we use
    mode value before any encode calls during the first pass encoding (see
    set_speed_features() logic). The mode for the first pass must be different
    from BEST to make first pass fast.

    Change-Id: I562a7d32004ff631695d91c09a44d8a9076fd6b5

  • Revision d9b62160a0: Implements several heuristics to prune mode search

    3 July 2013, by Deb Mukherjee

    Changed Paths:
     Modify /vp9/encoder/vp9_encodeframe.c
     Modify /vp9/encoder/vp9_onyx_if.c
     Modify /vp9/encoder/vp9_onyx_int.h
     Modify /vp9/encoder/vp9_rdopt.c

    Implements several heuristics to prune mode search

    Skips mode searches for intra and compound inter modes depending
    on the best mode so far and the reference frames. The various
    heuristics to be used are selected by bits from a flag. The
    previous direction based intra mode search pruning is also absorbed
    in this framework.

    Specifically the flags and their impact are :

    1) FLAG_SKIP_INTRA_BESTINTER (skip intra mode search for oblique
    directional modes and TM_PRED if the best so far is
    an inter mode)
    derfraw300 : -0.15%, 10% speedup

    2) FLAG_SKIP_INTRA_DIRMISMATCH (skip D27, D63, D117 and D153
    mode search if the best so far is not one of the closest
    hor/vert/diagonal directions)
    derfraw300 : -0.05%, about 9% speedup

    3) FLAG_SKIP_COMP_BESTINTRA (skip compound prediction mode
    search if the best so far is an intra mode)
    derfraw300 : -0.06%, about 7-8% speedup

    4) FLAG_SKIP_COMP_REFMISMATCH (skip compound prediction search
    if the best single ref inter mode does not have the same ref
    as one of the two references being tested in the compound mode)
    derfraw300 : -0.56%, about 10% speedup

    Change-Id: I1a736cd29b36325489e7af9f32698d6394b2c495