
Other articles (104)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
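
    Purely as an illustration (MediaSPIP itself is a SPIP/PHP application, so this is not its actual code), the conversions described above correspond to ffmpeg invocations roughly along these lines:

# Illustrative sketch only: produce the HTML5/Flash-friendly renditions described
# above by shelling out to ffmpeg. File names and codec choices are assumptions,
# not MediaSPIP's real pipeline.
import subprocess

def convert_for_web(src: str) -> None:
    jobs = {
        "out.webm": ["-c:v", "libvpx", "-c:a", "libvorbis"],    # WebM for HTML5
        "out.ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"], # OGV for HTML5
        "out.mp4":  ["-c:v", "libx264", "-c:a", "aac"],         # MP4 for Flash players
    }
    for dst, codec_args in jobs.items():
        subprocess.run(["ffmpeg", "-y", "-i", src, *codec_args, dst], check=True)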

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs handled by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats used: h264 : H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 m4v : raw MPEG-4 video format flv : Flash Video (FLV) / Sorenson Spark / Sorenson H.263 Theora wmv :
    Possible output video formats
    To begin with, we (...)
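
    As a companion to those commands, a small Python sketch (assuming only that an ffmpeg binary is on the PATH) can parse their output to check for a particular codec or container:

# Sketch: query the local ffmpeg install for codec/format support,
# mirroring `ffmpeg -codecs` and `ffmpeg -formats`.
import subprocess

def ffmpeg_lists(name: str, what: str = "codecs") -> bool:
    """Return True if `name` appears in `ffmpeg -codecs` or `ffmpeg -formats` output."""
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", f"-{what}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(name in line.split() for line in out.splitlines())

print(ffmpeg_lists("h264"))             # codec support
print(ffmpeg_lists("webm", "formats"))  # container support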

  • Managing the farm

    2 March 2010

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to regulate the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

On other sites (6764)

  • Using PyAV to encode mono audio to file, params match docs, but still causes Errno 22

    20 February 2023, by andrew8088

    While trying to use PyAV to encode live mono audio from a microphone to a compressed audio stream (using mp2 or flac as the encoder), the program kept raising ValueError: [Errno 22] Invalid argument.

    


    To remove the live microphone source as a cause of the problem, and to make the problematic code easier for others to run/test, I have removed the mic source and now just generate a pure tone as a sequence of input buffers.

    


    All attempts to figure out the missing or mismatched or incorrect argument have just resulted in seeing documentation and examples that are the same as my code.

    


    I would like to know from someone who has used PyAV successfully for mono audio what the correct method and parameters are for encoding mono frames into the mono stream.

    


    The package used is av 10.0.0 installed with
pip3 install av --no-binary av
so it uses my package-manager provided ffmpeg library, which is version 4.2.7.

    


    The problematic Python code is:

    


    #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Recreating an error 22 when encoding sound with PyAV.

Created on Sun Feb 19 08:10:29 2023
@author: andrewm
"""
import typing
import sys
import math
import fractions

import av
from av import AudioFrame

""" Ensure some PyAudio constants are still defined without changing 
    the PyAudio recording callback function and without depending 
    on PyAudio simply for reproducing the PyAV bug [Errno 22] thrown in 
    File "av/filter/context.pyx", line 89, in av.filter.context.FilterContext.push
"""
class PA_Stub():
    paContinue = True
    paComplete= False

pyaudio = PA_Stub()


"""Generate pure tone at given frequency with amplitude 0...1.0 at 
   sampling frequency fs and beginning at phase offset 'phase'.
   Returns the new phase after the sinusoid has cycled over the 
   sampling window length.
"""
def generate_tone(
        freq:int, phase:float, amp:float, fs, samp_fmt, buffer:bytearray
) -> float:
    assert samp_fmt == "s16", "Only s16 supported atm"
    samp_size_bytes = 2
    n_samples = int(len(buffer)/samp_size_bytes)
    window = [int(0) for i in range(n_samples)]
    theta = phase
    phase_inc = 2*math.pi * freq / fs
    for i in range(n_samples):
        v = amp * math.sin(theta)
        theta += phase_inc
        s = int((2**15-1)*v)
        window[i] = s
    for sample_i in range(len(window)):
        byte_i = sample_i * samp_size_bytes
        enc = window[sample_i].to_bytes(
                2, byteorder=sys.byteorder, signed=True
        )
        buffer[byte_i] = enc[0]
        buffer[byte_i+1] = enc[1]
    return theta


channels = 1
fs = 44100  # Record at 44100 samples per second
fft_size_samps = 256
chunk_samps = fft_size_samps * 10  # Record in chunks that are multiples of fft windows.

# print(f"fft_size_samps={fft_size_samps}\nchunk_samps={chunk_samps}")

seconds = 3.0
out_filename = "testoutput.wav"

# Store data in chunks for 3 seconds
sample_limit = int(fs * seconds)
sample_len = 0
frames = []  # Initialize array to store frames

ffmpeg_codec_name = 'mp2'  # flac, mp3, or libvorbis make same error.

sample_size_bytes = 2
buffer = bytearray(int(chunk_samps*sample_size_bytes))
chunkperiod = chunk_samps / fs
total_chunks = int(math.ceil(seconds / chunkperiod))
phase = 0.0

### uncomment if you want to see the synthetic data being used as a mic input.
# with open("test.raw","wb") as raw_out:
#     for ci in range(total_chunks):
#         phase = generate_tone(2600, phase, 0.8, fs, "s16", buffer)
#         raw_out.write(buffer)
# print("finished gen test")
# sys.exit(0)
# #---- 

# Using mp2 or mkv as the container format gets the same error.
with av.open(out_filename+'.mp2', "w", format="mp2") as output_con:
    output_con.metadata["title"] = "My title"
    output_con.metadata["key"] = "value"
    channel_layout = "mono"
    sample_fmt = "s16p"

    ostream = output_con.add_stream(ffmpeg_codec_name, fs, layout=channel_layout)
    assert ostream is not None, "No stream!"
    cctx = ostream.codec_context
    cctx.sample_rate = fs
    cctx.time_base = fractions.Fraction(numerator=1,denominator=fs)
    cctx.format = sample_fmt
    cctx.channels = channels
    cctx.layout = channel_layout
    print(cctx, f"layout#{cctx.channel_layout}")
    
    # Define PyAudio-style callback for recording plus PyAV transcoding.
    def rec_callback(in_data, frame_count, time_info, status):
        global sample_len
        global ostream
        frames.append(in_data)
        nsamples = int(len(in_data) / (channels*sample_size_bytes))
        
        frame = AudioFrame(format=sample_fmt, layout=channel_layout, samples=nsamples)
        frame.sample_rate = fs
        frame.time_base = fractions.Fraction(numerator=1,denominator=fs)
        frame.pts = sample_len
        frame.planes[0].update(in_data)
        print(frame, len(in_data))
        
        for out_packet in ostream.encode(frame):
            output_con.mux(out_packet)
        for out_packet in ostream.encode(None):
            output_con.mux(out_packet)
        
        sample_len += nsamples
        retflag = pyaudio.paContinue if sample_len < sample_limit else pyaudio.paComplete
        return (in_data, retflag)

    # Driver loop, reconstructed from the traceback and console output below
    # (the end of the script was truncated when the question was posted):
    print("Beginning")
    for ci in range(total_chunks):
        phase = generate_tone(2600, phase, 0.8, fs, "s16", buffer)
        ret_data, ret_flag = rec_callback(buffer, ci, {}, 1)
        print(".", end="")

    


    If you uncomment the RAW output part you will find the generated data can be imported as PCM s16 Mono 44100Hz into Audacity and plays the expected tone, so the generated audio data does not seem to be the problem.

    


    The normal program console output up until the exception is:

    


    <av.AudioCodecContext ... mp2 at 0x7f8e38202cf0> layout#4
Beginning
<av.AudioFrame ...> 5120
.<av.AudioFrame ...> 5120


    


    The stack trace is:

    


    Traceback (most recent call last):
  File "Dev/multichan_recording/av_encode.py", line 147, in <module>
    ret_data, ret_flag = rec_callback(buffer, ci, {}, 1)
  File "Dev/multichan_recording/av_encode.py", line 121, in rec_callback
    for out_packet in ostream.encode(frame):
  File "av/stream.pyx", line 153, in av.stream.Stream.encode
  File "av/codec/context.pyx", line 484, in av.codec.context.CodecContext.encode
  File "av/audio/codeccontext.pyx", line 42, in av.audio.codeccontext.AudioCodecContext._prepare_frames_for_encode
  File "av/audio/resampler.pyx", line 101, in av.audio.resampler.AudioResampler.resample
  File "av/filter/graph.pyx", line 211, in av.filter.graph.Graph.push
  File "av/filter/context.pyx", line 89, in av.filter.context.FilterContext.push
  File "av/error.pyx", line 336, in av.error.err_check
ValueError: [Errno 22] Invalid argument


    edit: It's interesting that the error happens on the 2nd AudioFrame; apparently the first one was encoded okay, even though both are given the same attribute values apart from the presentation timestamp (pts). Leaving the pts out and letting PyAV/ffmpeg generate it does not fix the error, so an incorrect pts does not seem to be the cause.


    After a brief glance at av/filter/context.pyx, the exception must come from a bad return value from res = lib.av_buffersrc_write_frame(self.ptr, frame.ptr).
    Trying to dig into av_buffersrc_write_frame in the ffmpeg source, it is not clear what could be causing this error. The only obvious candidate is a mismatch between channel layouts, but my code sets the same layout on the Stream and on the Frame. That problem was identified by an old question, "pyav - cannot save stream as mono", and its answer (that one required parameter is undocumented) is the only reason the code now passes the layout='mono' argument when creating the stream.


    The program output shows layout #4 is being used, and from https://github.com/FFmpeg/FFmpeg/blob/release/4.2/libavutil/channel_layout.h you can see this is the value of the symbol AV_CH_FRONT_CENTER, which is the only channel in the MONO layout.


    The mismatch is surely some other object property or an undocumented parameter requirement.


    How do you encode mono audio to a compressed stream with PyAV?

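    For comparison, below is a minimal sketch (not the poster's code, and not a confirmed fix) of the documented PyAV pattern for mono encoding: frames are built with AudioFrame.from_ndarray, and the encoder is flushed with encode(None) only once, after the last frame, whereas the callback above flushes after every chunk.

# Minimal sketch, not the original script: encode a mono s16 tone to MP2 with PyAV.
# Assumes av 10.x and numpy; frame size, file name and tone parameters are arbitrary.
import math
import numpy as np
import av

fs = 44100
chunk = 2560                          # samples per frame (matches the 5120-byte chunks above)
freq, amp = 2600, 0.8
total_chunks = int(math.ceil(3.0 * fs / chunk))

with av.open("tone_sketch.mp2", "w") as container:
    stream = container.add_stream("mp2", rate=fs, layout="mono")
    phase, pts = 0.0, 0
    for _ in range(total_chunks):
        t = phase + 2 * math.pi * freq / fs * np.arange(chunk)
        samples = (amp * np.sin(t) * (2**15 - 1)).astype(np.int16)
        phase = float(t[-1]) + 2 * math.pi * freq / fs

        # Packed signed 16-bit, shape (1, n_samples) for a mono frame.
        frame = av.AudioFrame.from_ndarray(samples.reshape(1, -1), format="s16", layout="mono")
        frame.sample_rate = fs
        frame.pts = pts
        pts += chunk

        for packet in stream.encode(frame):
            container.mux(packet)

    # Flush the encoder once, after the last frame.
    for packet in stream.encode(None):
        container.mux(packet)

    Whether the per-chunk flush is really what triggers Errno 22 here is unverified, but feeding further frames to an encoder that has already been flushed is not generally supported, so it seems worth ruling out.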

  • Can build & make video call with pjsip and ffmpeg

    10 May 2023, by QViet

    I am trying to build PJSIP with ffmpeg, with this config:


    I followed these steps:


    • Build the needed libraries and place them in the thirdparty folder, under the name ffmpeg.

    • Set up the library and header links.

    • Run the build with "$configure --with-ffmpeg="

    • In config_site.h, add:

    #define PJMEDIA_HAS_OPENH264_CODEC 1
    #define PJMEDIA_HAS_VIDEO 1
    #define PJMEDIA_VIDEO_DEV_HAS_FFMPEG 1
    #define PJMEDIA_HAS_FFMPEG_VID_CODEC 1
    #define PJMEDIA_HAS_FFMPEG 1
    #define PJMEDIA_HAS_FFMPEG_CODEC_H264 1
    #define PJMEDIA_HAS_LIBAVDEVICE 1
    #define PJMEDIA_HAS_OPENH264_CODEC 1


    I found that I have to enable PJMEDIA_HAS_OPENH264_CODEC; if I don't, the build succeeds but importing the library gives this error:


    Undefined symbol: _WelsCreateDecoder


    because the Wels symbols that come from the openh264 lib are missing.


    The build eventually succeeds with the above config, but with this call:


    pj_status_t status = pjsua_vid_enum_codecs(videoCodecInfo, &videoCodecCount);


    the codec info shows only one codec, "H264/97", which is the OpenH264 codec; I can't see ffmpeg here. When I start a call as normal, the log shows OpenH264 initialising the call and opening the camera.


    What steps do I need to take to use ffmpeg? I can't find any docs about it.


    Can you help me?


    This is the log call stack:


    2023-04-24 10:17:21.522976+0700 PSUAKitSample[83000:15642975] 💚 DEBUG   10:17:21.523 [SIPSample -[SIPSample startEndpointWithEndpointConfiguration:error:]:272] Creating new PSUASIP Endpoint instance.
10:17:21.525  os_core_unix.c   !pjlib 2.13-dev for POSIX initialized
10:17:21.526  sip_endpoint.c   .Creating endpoint instance...
10:17:21.527  pjlib            .select() I/O Queue created (0x1050a32c8)
10:17:21.527  sip_endpoint.c   .Module "mod-msg-print" registered
10:17:21.527  sip_transport.c  .Transport manager created.
10:17:21.527  pjsua_core.c     .PJSUA state changed: NULL --> CREATED
10:17:21.528  sip_endpoint.c   .Module "mod-pjsua-log" registered
10:17:21.529  sip_endpoint.c   .Module "mod-tsx-layer" registered
10:17:21.529  sip_endpoint.c   .Module "mod-stateful-util" registered
10:17:21.529  sip_endpoint.c   .Module "mod-ua" registered
10:17:21.530  sip_endpoint.c   .Module "mod-100rel" registered
10:17:21.530  sip_endpoint.c   .Module "mod-pjsua" registered
10:17:21.530  sip_endpoint.c   .Module "mod-invite" registered
10:17:21.677  coreaudio_dev.c  .. dev_id 0: iPhone IO device  (in=1, out=1) 8000Hz
10:17:21.677  coreaudio_dev.c  ..core audio initialized
10:17:21.678  pjlib            ..select() I/O Queue created (0x1060684a8)
10:17:21.683  pjsua_vid.c      ..Initializing video subsystem..
10:17:21.684  vid_conf.c       ...Created video conference bridge with 32 ports
10:17:21.684  openh264.cpp     ...OpenH264 codec initialized
10:17:21.685  opengl_dev.c     ...OpenGL device initialized
10:17:21.715  darwin_dev.m     ...Darwin video initialized with 5 devices:
10:17:21.715  darwin_dev.m     ... 0: [Renderer] iOS - UIView
10:17:21.715  darwin_dev.m     ... 1: [Capturer] AVF - Front Camera
10:17:21.716  darwin_dev.m     ... 2: [Capturer] AVF - Back Camera
10:17:21.716  darwin_dev.m     ... 3: [Capturer] AVF - Back Dual Camera
10:17:21.716  darwin_dev.m     ... 4: [Capturer] AVF - Back Telephoto Camera
10:17:21.716  colorbar_dev.c   ...Colorbar video src initialized with 2 device(s):
10:17:21.716  colorbar_dev.c   ... 0: Colorbar generator
10:17:21.716  colorbar_dev.c   ... 1: Colorbar-active
10:17:21.717  sip_endpoint.c   .Module "mod-evsub" registered
10:17:21.717  sip_endpoint.c   .Module "mod-presence" registered


  • To all Matomo plugin developers: Matomo 5 is coming, make your plugin compatible now

    5 May 2023, by Matomo Core Team — Development

    We're planning to release the first beta of Matomo 5 in a few weeks. To make it easy for Matomo users to upgrade to this beta, it would be great to have as many plugins on the Marketplace as possible already updated and compatible with Matomo 5. Then many users will be able to upgrade to the first beta without any issues.

    Presumably, as you put your plugin on our Marketplace, you want people to use it. Making your plugin compatible with Matomo 5 helps ensure that people will be able to find and keep using your plugin. If your plugin is not compatible with Matomo 5, your plugin will be automatically deactivated in Matomo 5 instances. We’ll be happy to help you achieve compatibility should there be any issue.

    How do I upgrade my Matomo instance to Matomo 5?

    If you have installed your Matomo development environment through git, you can simply check out the Matomo 5 branch "5.x-dev" and install its dependencies by executing these commands:

    • git checkout 5.x-dev
    • composer install

    Alternatively, you can also download the latest version directly from GitHub as a zip file and run composer install afterwards.

    How do I upgrade my plugin to Matomo 5?

    While there are some breaking changes in Matomo 5, most of our Platform APIs remain unchanged, and almost all changes are for rarely used APIs. Quite often, making your plugin compatible with Matomo 5 will just be a matter of adjusting the “plugin.json” file (as mentioned in the migration guide).

    You can find all developer documentation on our developer zone which has already been updated for Matomo 5.

    How do I know my plugin changes were released successfully?

    If you have configured an email address within your “plugin.json” file, then you will receive a confirmation or an error email within a few minutes. Alternatively, you can also check out your plugin page on the Marketplace directly. If the plugin release was successful, you will see additional links below the download button showing which versions your plugin is compatible with.

    what it looks like when your plugin is compatible with multiple Matomo versions

    How can I switch between Matomo 4 and Matomo 5, or downgrade to Matomo 4?

    To downgrade from Matomo 5 to Matomo 4 in your Matomo development environment:

    • check out the “4.x-dev” branch 
    • run “composer install” as usual

    When will the final Matomo 5 release be available?

    We estimate that the final stable Matomo 5.0.0 release will be available in approximately 2-3 months.

    What is new in Matomo 5?

    We don't have a summary of the changes available just yet, but you can see all closed issues within this release here.

    Any questions or need help?

    If you have any questions, or experience any problems during the migration, don’t hesitate to get in touch with us. We’ll be happy to help get your plugin compatible and the update published. If you find any undocumented breaking change or find any step during the migration process not clear, please let us know as well.

    Thank you for contributing a plugin to the Marketplace and making Matomo better. We really appreciate your work!