Other articles (98)

  • Enabling visitor registration

    12 April 2011

    It is also possible to enable visitor registration, which lets anyone open an account on the channel in question by themselves, for example in the context of open projects.
    To do so, simply go to the site's configuration area and choose the "Gestion des utilisateurs" (user management) submenu. The first form displayed corresponds to this feature.
    By default, during its initialization MediaSPIP created a menu item in the top menu of the page leading (...)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (8868)

  • Announcing our latest open source project: DeviceDetector

    30 July 2014, by Stefan Giehl — Community, Development, Meta, DeviceDetector

    This blog post is an announcement for our latest open source project release: DeviceDetector! The Universal Device Detection library will parse any User Agent and detect the browser, operating system, device used (desktop, tablet, mobile, TV, car, console, etc.), brand and model.

    Read on to learn more about this exciting release.

    Why did we create DeviceDetector?

    Our previous library, UserAgentParser, could only detect operating systems and browsers. But as ever more traffic comes from mobile devices such as smartphones and tablets, it is increasingly important to know which devices a website's visitors are using.

    To ensure that device detection within Piwik gets the attention it needs to be as accurate as possible, we decided to move that part of Piwik into a separate project that we will maintain on its own. As a standalone project, we hope DeviceDetector will gain better visibility as well as better support by and for the community!

    DeviceDetector is hosted on GitHub at piwik/device-detector. It is also available as a Composer package through Packagist.

    How DeviceDetector works

    Every client requesting data from a web server identifies itself by sending a so-called User-Agent header within the request. A User Agent may contain several pieces of information, such as:

    • client name and version (clients can be browsers or other software like feed readers, media players, apps,…)
    • operating system name and version
    • device identifier, which can be used to detect the brand and model.

    For example:

    Mozilla/5.0 (Linux; Android 4.4.2; Nexus 5 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.99 Mobile Safari/537.36

    This User Agent contains the following information:

    The operating system is Android 4.4.2, the client is the Chrome Mobile 32.0.1700.99 browser, and the device is a Google Nexus 5 smartphone.
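
    To make this concrete, here is a minimal, hypothetical sketch of this kind of regex-based extraction in Python. It is a toy illustration only, not DeviceDetector's actual code (DeviceDetector is a PHP library with far more extensive rules):

    import re

    # A drastically simplified, invented rule set; DeviceDetector ships
    # thousands of such patterns in its YAML definition files.
    OS_PATTERNS = [(re.compile(r"Android (?P<v>[\d.]+)"), "Android")]
    CLIENT_PATTERNS = [(re.compile(r"Chrome/(?P<v>[\d.]+) Mobile"), "Chrome Mobile")]
    DEVICE_PATTERNS = [(re.compile(r"Nexus 5"), ("Google", "Nexus 5", "smartphone"))]

    def parse_user_agent(ua):
        result = {}
        for rx, name in OS_PATTERNS:
            if m := rx.search(ua):
                result["os"] = f"{name} {m.group('v')}"
        for rx, name in CLIENT_PATTERNS:
            if m := rx.search(ua):
                result["client"] = f"{name} {m.group('v')}"
        for rx, (brand, model, dev_type) in DEVICE_PATTERNS:
            if rx.search(ua):
                result["device"] = {"brand": brand, "model": model, "type": dev_type}
        return result

    ua = ("Mozilla/5.0 (Linux; Android 4.4.2; Nexus 5 Build/KOT49H) "
          "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.99 Mobile Safari/537.36")
    print(parse_user_agent(ua))
    # {'os': 'Android 4.4.2', 'client': 'Chrome Mobile 32.0.1700.99',
    #  'device': {'brand': 'Google', 'model': 'Nexus 5', 'type': 'smartphone'}}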

    What DeviceDetector currently detects

    DeviceDetector is able to detect:

    • bots, such as search engines, feed fetchers and site monitors
    • five different client types, including around 100 browsers, 15 feed readers, some media players, personal information managers (such as mail clients) and mobile apps using the AFNetworking framework
    • around 80 operating systems
    • nine different device types (smartphones, tablets, feature phones, consoles, TVs, car browsers, cameras, smart displays and desktop devices) from over 180 brands

    Note: Piwik itself does not currently use the full feature set of DeviceDetector. Client detection is not yet implemented in Piwik (only detected browsers are reported; other clients are marked as Unknown). Client detection will be added to Piwik in the future; follow #5413 to stay updated.

    Performance of DeviceDetector

    Detection is currently handled by an enormous number of regexes defined in several .yml files. As parsing these .yml files is a bit slow, DeviceDetector can cache the parsed contents. By default, DeviceDetector uses a static cache, meaning everything is cached in static variables. Since that only speeds up repeated detections within a single process, adapters are also available to cache in files or memcache, which speeds up detection across requests.
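
    As a rough illustration of the caching idea, here is a hypothetical Python sketch (DeviceDetector itself is written in PHP, and the names load_rules and RULE_CACHE are invented for this example):

    import yaml  # PyYAML

    # Module-level dict playing the role of the "static cache": each rule
    # file is parsed at most once per process and then reused.
    RULE_CACHE = {}

    def load_rules(path):
        if path not in RULE_CACHE:
            with open(path, encoding="utf-8") as f:
                RULE_CACHE[path] = yaml.safe_load(f)  # the slow step
        return RULE_CACHE[path]

    A file or memcache adapter would replace this in-memory dict with a persistent store, so the parsing cost is also amortized across processes and requests.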

    How can users contribute to DeviceDetector?

    Submit your devices that are not detected yet

    If you own a device that is currently not detected correctly by DeviceDetector, please create an issue on GitHub.
    To check whether your device is detected correctly, go to your Piwik server, click the ‘Settings’ link, then click ‘Device Detection’ under the Diagnostic menu. If the data does not match, please copy the displayed User Agent and use it, together with your device data, to create a ticket.

    Submit a list of your User Agents

    To create new detections or improve existing ones, we need lists of User Agents. If you have a website visited mostly by non-desktop devices, it would be useful if you sent us a list of the User Agents that visited it. To do so, you need access to your web server's access logs. The following command will extract the User Agents:

    # For combined-format access logs: split each line on double quotes,
    # take field 6 (the User-Agent), then list the 20,000 most frequent ones.
    zcat ~/path/to/access/logs* | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head -n20000 > /home/piwik/top-user-agents.txt

    If you would like to help us with this data, please get in touch at devicedetector@piwik.org.

    Submit improvements on GitHub

    As DeviceDetector is a free/libre library, we invite you to help us improve the detections as well as the code. Please feel free to create tickets and pull requests on GitHub.

    What’s the next big thing for DeviceDetector?

    Please check out the list of issues in the device-detector issue tracker.

    We hope the community will answer our call for help. Together, we can make DeviceDetector the most powerful device detection library!

    Happy Device Detection,

  • avformat/iamf_writer: add support for expanded channel layouts

    10 December 2024, by James Almer
    avformat/iamf_writer: add support for expanded channel layouts

    Defined in Immersive Audio Model and Formats 1.1.0, sections 3.6.2 and 3.7.3

    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavformat/iamf_writer.c
  • How to Stream Audio from Google Cloud Storage in Chunks and Convert Each Chunk to WAV for Whisper Transcription

    14 November 2024, by Douglas Landvik

    I'm working on a project where I need to transcribe audio stored in a Google Cloud Storage bucket using OpenAI's Whisper model. The audio is stored in WebM format with Opus encoding, and due to the file size, I'm streaming the audio in 30-second chunks.


    To convert each chunk to WAV (16 kHz, mono, 16-bit PCM) compatible with Whisper, I'm using FFmpeg. The first chunk converts successfully, but subsequent chunks fail to convert. I suspect this is because each chunk lacks the WebM container's header, which FFmpeg needs to interpret the Opus codec correctly.
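
    One idea I am considering (a sketch only; the helper name is invented, and it presumably only works if the chunks are also split on WebM cluster boundaries rather than arbitrary byte offsets) is to capture the container's initialization segment from the first chunk and prepend it to each later chunk before piping it to FFmpeg:

    # In a WebM/Matroska file the initialization data (EBML header,
    # Segment info, Tracks) ends where the first Cluster element begins;
    # the Cluster element ID is 0x1F43B675.
    CLUSTER_ID = b"\x1f\x43\xb6\x75"

    def extract_init_segment(first_chunk: bytes) -> bytes:
        pos = first_chunk.find(CLUSTER_ID)
        if pos == -1:
            raise ValueError("no Cluster element found in first chunk")
        return first_chunk[:pos]

    # For chunks after the first, feed init_segment + chunk_bytes to FFmpeg
    # instead of the bare chunk_bytes.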


    Here’s a simplified version of my approach:


    Download chunk: I download each chunk from GCS as bytes.
    Convert with FFmpeg: I pass the bytes to FFmpeg to convert each chunk from WebM/Opus to WAV.


    # (Excerpt from a larger module; assumes these imports plus app-specific
    # helpers defined elsewhere in my codebase: logger, send_discord_alert,
    # notification_manager, detect_audio_format, ConsultationService,
    # ConsultationProcessor, Consultation, TemplateService.)
    import asyncio
    import io
    import json
    import os
    import subprocess

    from google.cloud import storage
    from google.oauth2 import service_account


    async def handle_transcription_and_notify(
        consultation_service: ConsultationService,
        consultation_processor: ConsultationProcessor,
        consultation: Consultation,
        language: str,
        notes: str,
        clinic_id: str,
        vet_email: str,
        trace_id: str,
        blob_path: str,
        max_retries: int = 3,
        retry_delay: int = 5,
        max_concurrent_tasks: int = 3
    ):
        """
        Handles the transcription process by streaming the file from GCS,
        converting it to a compatible format, and notifying the client via WebSocket.
        """
        chunk_duration_sec = 30  # 30 seconds per chunk
        logger.info(f"Starting transcription process for consultation {consultation.consultation_id}",
                    extra={'trace_id': trace_id})

        # Initialize GCS client
        service_account_key = os.environ.get('SERVICE_ACCOUNT_KEY_BACKEND')
        if not service_account_key:
            logger.error("Service account key not found in environment variables", extra={'trace_id': trace_id})
            await send_discord_alert(
                f"Service account key not found for consultation {consultation.consultation_id}.\nTrace ID: {trace_id}"
            )
            return

        try:
            service_account_info = json.loads(service_account_key)
            credentials = service_account.Credentials.from_service_account_info(service_account_info)
        except Exception as e:
            logger.error(f"Error loading service account credentials: {str(e)}", extra={'trace_id': trace_id})
            await send_discord_alert(
                f"Error loading service account credentials for consultation {consultation.consultation_id}.\nError: {str(e)}\nTrace ID: {trace_id}"
            )
            return

        storage_client = storage.Client(credentials=credentials)
        bucket_name = 'vetz_consultations'
        blob = storage_client.bucket(bucket_name).get_blob(blob_path)
        bytes_per_second = 16000 * 2  # 32,000 bytes per second
        chunk_size_bytes = 30 * bytes_per_second
        size = blob.size

        async def stream_blob_in_chunks(blob, chunk_size):
            loop = asyncio.get_running_loop()
            start = 0
            size = blob.size
            while start < size:
                end = min(start + chunk_size - 1, size - 1)
                try:
                    logger.info(f"Requesting chunk from {start} to {end}", extra={'trace_id': trace_id})
                    chunk = await loop.run_in_executor(
                        None, lambda: blob.download_as_bytes(start=start, end=end)
                    )
                    if not chunk:
                        break
                    logger.info(f"Yielding chunk from {start} to {end}, size: {len(chunk)} bytes",
                                extra={'trace_id': trace_id})
                    yield chunk
                    start += chunk_size
                except Exception as e:
                    logger.error(f"Error downloading chunk from {start} to {end}: {str(e)}", exc_info=True,
                                 extra={'trace_id': trace_id})
                    raise e

        async def convert_to_wav(chunk_bytes, chunk_idx):
            """
            Convert audio chunk to WAV format compatible with Whisper (16 kHz, mono, 16-bit PCM).
            """
            try:
                logger.debug(f"Processing chunk {chunk_idx}: size = {len(chunk_bytes)} bytes")

                detected_format = await detect_audio_format(chunk_bytes)
                logger.info(f"Detected audio format for chunk {chunk_idx}: {detected_format}")
                input_io = io.BytesIO(chunk_bytes)
                output_io = io.BytesIO()

                # ffmpeg command to convert the chunk to WAV (16 kHz, mono,
                # 16-bit PCM), with debug information
                ffmpeg_command = [
                    "ffmpeg",
                    "-loglevel", "debug",
                    "-f", "s16le",            # Treat input as raw PCM data
                    "-ar", "48000",           # Set input sample rate
                    "-ac", "1",               # Set input to mono
                    "-i", "pipe:0",
                    "-ar", "16000",           # Set output sample rate to 16 kHz
                    "-ac", "1",               # Ensure mono output
                    "-sample_fmt", "s16",     # Set output format to 16-bit PCM
                    "-f", "wav",              # Output as WAV format
                    "pipe:1"
                ]

                process = subprocess.Popen(
                    ffmpeg_command,
                    stdin=subprocess.PIPE,
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE
                )

                stdout, stderr = process.communicate(input=input_io.read())

                if process.returncode == 0:
                    logger.info(f"FFmpeg conversion completed successfully for chunk {chunk_idx}")
                    output_io.write(stdout)
                    output_io.seek(0)

                    # Save the WAV file locally for listening
                    output_dir = "converted_chunks"
                    os.makedirs(output_dir, exist_ok=True)
                    file_path = os.path.join(output_dir, f"chunk_{chunk_idx}.wav")

                    with open(file_path, "wb") as f:
                        f.write(stdout)
                    logger.info(f"Chunk {chunk_idx} saved to {file_path}")

                    return output_io
                else:
                    logger.error(f"FFmpeg failed for chunk {chunk_idx} with return code {process.returncode}")
                    logger.error(f"Chunk {chunk_idx} - FFmpeg stderr: {stderr.decode()}")
                    return None

            except Exception as e:
                logger.error(f"Unexpected error in FFmpeg conversion for chunk {chunk_idx}: {str(e)}")
                return None

        async def transcribe_chunk(idx, chunk_bytes):
            for attempt in range(1, max_retries + 1):
                try:
                    logger.info(f"Transcribing chunk {idx + 1} (attempt {attempt}).", extra={'trace_id': trace_id})

                    # Convert to WAV format
                    wav_io = await convert_to_wav(chunk_bytes, idx)
                    if not wav_io:
                        logger.error(f"Failed to convert chunk {idx + 1} to WAV format.")
                        return ""

                    wav_io.name = "chunk.wav"
                    chunk_transcription = await consultation_processor.transcribe_audio_whisper(wav_io)
                    logger.info(f"Chunk {idx + 1} transcribed successfully.", extra={'trace_id': trace_id})
                    return chunk_transcription
                except Exception as e:
                    logger.error(f"Error transcribing chunk {idx + 1} (attempt {attempt}): {str(e)}", exc_info=True,
                                 extra={'trace_id': trace_id})
                    if attempt < max_retries:
                        await asyncio.sleep(retry_delay)
                    else:
                        await send_discord_alert(
                            f"Max retries reached for chunk {idx + 1} in consultation {consultation.consultation_id}.\nError: {str(e)}\nTrace ID: {trace_id}"
                        )
                        return ""  # Return empty string for failed chunk

        await notification_manager.send_personal_message(
            f"Consultation {consultation.consultation_id} is being transcribed.", vet_email
        )

        try:
            idx = 0
            full_transcription = []
            async for chunk in stream_blob_in_chunks(blob, chunk_size_bytes):
                transcription = await transcribe_chunk(idx, chunk)
                if transcription:
                    full_transcription.append(transcription)
                idx += 1

            combined_transcription = " ".join(full_transcription)
            consultation.full_transcript = (consultation.full_transcript or "") + " " + combined_transcription
            consultation_service.save_consultation(clinic_id, vet_email, consultation)
            logger.info(f"Transcription saved for consultation {consultation.consultation_id}.",
                        extra={'trace_id': trace_id})

        except Exception as e:
            logger.error(f"Error during transcription process: {str(e)}", exc_info=True, extra={'trace_id': trace_id})
            await send_discord_alert(
                f"Error during transcription process for consultation {consultation.consultation_id}.\nError: {str(e)}\nTrace ID: {trace_id}"
            )
            return

        await notification_manager.send_personal_message(
            f"Consultation {consultation.consultation_id} has been transcribed.", vet_email
        )

        try:
            template_service = TemplateService()
            medical_record_template = template_service.get_template_by_name(
                consultation.medical_record_template_id).sections

            sections = await consultation_processor.extract_structured_sections(
                transcription=consultation.full_transcript,
                notes=notes,
                language=language,
                template=medical_record_template,
            )
            consultation.sections = sections
            consultation_service.save_consultation(clinic_id, vet_email, consultation)
            logger.info(f"Sections processed for consultation {consultation.consultation_id}.",
                        extra={'trace_id': trace_id})
        except Exception as e:
            logger.error(f"Error processing sections for consultation {consultation.consultation_id}: {str(e)}",
                         exc_info=True, extra={'trace_id': trace_id})
            await send_discord_alert(
                f"Error processing sections for consultation {consultation.consultation_id}.\nError: {str(e)}\nTrace ID: {trace_id}"
            )
            raise e

        await notification_manager.send_personal_message(
            f"Consultation {consultation.consultation_id} is fully processed.", vet_email
        )
        logger.info(f"Successfully processed consultation {consultation.consultation_id}.",
                    extra={'trace_id': trace_id})
