
Other articles (73)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to perform other manual (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in the standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...) -
Customising by adding your logo, banner or background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (10846)
-
How to stream synchronized video and audio in real-time from an Android smartphone using HLS while preserving orientation metadata?
6 March, by Jérôme LAROSE
Hello,

I am working on an Android application where I need to stream video from one or two cameras on my smartphone, along with audio from the microphone, in real time via a link or web page accessible to users. The stream should be live, allow rewinding (DVR functionality), and be recorded simultaneously. A latency of 1 to 2 minutes is acceptable, and the streaming is one-way.

I have chosen HLS (HTTP Live Streaming) for its browser compatibility and DVR support. However, I am encountering issues with audio-video synchronization, managing camera orientation metadata, and format conversions.

Here are my attempts:


Attempt 1: MP4 segmentation with MediaRecorder
- I used MediaRecorder with setNextOutputFile to generate short MP4 segments, then ffmpeg-kit to convert them to fMP4 for HLS (see the sketch below).
- Expected: well-aligned segments for smooth HLS playback.
- Result: timestamp issues causing jumps or interruptions in playback.
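For context, a minimal sketch of that segmented-recording setup, assuming API 26+; nextSegmentFile() is a hypothetical helper returning numbered output files, and the conversion hook is left as a stub. MediaRecorder rolls over to the queued file when the current segment approaches its size limit:

import android.media.MediaRecorder
import java.io.File

// Hypothetical helper: returns segment_000.mp4, segment_001.mp4, ... in the work directory.
fun nextSegmentFile(): File = TODO()

fun buildSegmentedRecorder(): MediaRecorder = MediaRecorder().apply {
    setAudioSource(MediaRecorder.AudioSource.MIC)
    setVideoSource(MediaRecorder.VideoSource.SURFACE)
    setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
    setVideoEncoder(MediaRecorder.VideoEncoder.H264)
    setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
    setMaxFileSize(5L * 1024 * 1024)  // roll over when a segment nears ~5 MB
    setOutputFile(nextSegmentFile())
    setOnInfoListener { mr, what, _ ->
        when (what) {
            // Current segment is nearly full: queue the next output file.
            MediaRecorder.MEDIA_RECORDER_INFO_MAX_FILESIZE_APPROACHING ->
                mr.setNextOutputFile(nextSegmentFile())
            // Recording switched to the queued file: the previous segment is
            // complete and can be handed to ffmpeg-kit for the fMP4 conversion.
            MediaRecorder.MEDIA_RECORDER_INFO_NEXT_OUTPUT_FILE_STARTED -> {
                // convert the finished segment with ffmpeg-kit here
            }
        }
    }
    prepare()
}

Note that each rolled-over MP4 restarts its timestamps at zero, which is consistent with the jumps observed once the segments are converted and concatenated for HLS.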
Attempt 2: MPEG2-TS via local socket
- I configured MediaRecorder to produce an MPEG2-TS stream sent via a local socket to ffmpeg-kit.
- Expected: stable streaming with preserved metadata.
- Result: streaming works, but orientation metadata is lost, leading to incorrectly oriented video (e.g., rotated 90°). (In hindsight this is expected: MP4 stores rotation as a track display matrix in the container, and MPEG-TS has no equivalent field, so MediaRecorder's orientation hint has nowhere to go.)
Attempt 3: Orientation correction with ffmpeg
- I tested -vf transpose=1 in ffmpeg to correct the orientation.
- Expected: correctly oriented video without excessive latency.
- Result: re-encoding takes too long for real-time streaming, causing unacceptable latency. A possible metadata-based alternative is sketched below.
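A possible way around the re-encode, noted here as an assumption rather than something I have validated: stamp the rotation as display metadata on a stream copy, so the player rotates at display time. Recent ffmpeg builds expose this as the -display_rotation input option (added in FFmpeg 6.0), for example:

ffmpeg -display_rotation 90 -i input.ts -c copy output.mp4

This only helps for MP4-family outputs, since MPEG-TS has no field to carry a display matrix, and the angle follows ffmpeg's counter-clockwise display-matrix convention, so it is worth verifying the direction on a test clip.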
Attempt 4: MPEG2-TS to fMP4 conversion
- I converted the MPEG2-TS stream to fMP4 with ffmpeg to preserve orientation.
- Expected: perfect audio-video synchronization.
- Result: slight desynchronization between audio and video, affecting the user experience (a possible mitigation is sketched below).
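For the residual drift in the last attempt, one mitigation to try (an assumption, not yet verified on my side) is letting ffmpeg resample the audio against its timestamps with aresample=async=1. Only the audio track is re-encoded, which is cheap next to video, while the video stays a stream copy:

ffmpeg -fflags +genpts -i tcp://localhost:8080 -c:v copy -af aresample=async=1 -c:a aac -f hls -hls_segment_type fmp4 ... playlist.m3u8

(The remaining HLS flags are the same as in the commands shown in the update below.)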
I am looking for a solution to:
- Stream an HLS feed from Android with correctly timestamped segments.
- Preserve orientation metadata without heavy re-encoding.
- Ensure perfect audio-video synchronization.
UPDATE


package com.example.angegardien

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.graphics.SurfaceTexture
import android.hardware.camera2.*
import android.media.*
import android.os.*
import android.util.Log
import android.view.Surface
import android.view.TextureView
import android.view.WindowManager
import androidx.activity.ComponentActivity
import androidx.core.app.ActivityCompat
import com.arthenica.ffmpegkit.FFmpegKit
import fi.iki.elonen.NanoHTTPD
import kotlinx.coroutines.*
import java.io.File
import java.io.IOException
import java.net.ServerSocket
import android.view.OrientationEventListener

/**
 * MainActivity class:
 * - Manages camera operations using the Camera2 API.
 * - Records video using MediaRecorder.
 * - Pipes data to FFmpeg to generate HLS segments.
 * - Hosts a local HLS server using NanoHTTPD to serve the generated HLS content.
 */
class MainActivity : ComponentActivity() {

 // TextureView used for displaying the camera preview.
 private lateinit var textureView: TextureView
 // Camera device instance.
 private lateinit var cameraDevice: CameraDevice
 // Camera capture session for managing capture requests.
 private lateinit var cameraCaptureSession: CameraCaptureSession
 // CameraManager to access camera devices.
 private lateinit var cameraManager: CameraManager
 // Directory where HLS output files will be stored.
 private lateinit var hlsDir: File
 // Instance of the HLS server.
 private lateinit var hlsServer: HlsServer

 // Camera id ("1" corresponds to the rear camera).
 private val cameraId = "1"
 // Flag indicating whether recording is currently active.
 private var isRecording = false

 // MediaRecorder used for capturing audio and video.
 private lateinit var activeRecorder: MediaRecorder
 // Surface for the camera preview.
 private lateinit var previewSurface: Surface
 // Surface provided by MediaRecorder for recording.
 private lateinit var recorderSurface: Surface

 // Port for the FFmpeg local socket connection.
 private val ffmpegPort = 8080

 // Coroutine scope to manage asynchronous tasks.
 private val scope = CoroutineScope(Dispatchers.IO + SupervisorJob())

 // Variables to track current device rotation and listen for orientation changes.
 private var currentRotation = 0
 private lateinit var orientationListener: OrientationEventListener

 override fun onCreate(savedInstanceState: Bundle?) {
 super.onCreate(savedInstanceState)

 // Initialize the TextureView and set it as the content view.
 textureView = TextureView(this)
 setContentView(textureView)

 // Get the CameraManager system service.
 cameraManager = getSystemService(CAMERA_SERVICE) as CameraManager
 // Setup the directory for HLS output.
 setupHLSDirectory()

 // Start the local HLS server on port 8081.
 hlsServer = HlsServer(8081, hlsDir, this)
 try {
 hlsServer.start()
 Log.d("HLS_SERVER", "HLS Server started on port 8081")
 } catch (e: IOException) {
 Log.e("HLS_SERVER", "Error starting HLS Server", e)
 }

 // Initialize the current rotation.
 currentRotation = getDeviceRotation()

 // Add a listener to detect orientation changes.
 orientationListener = object : OrientationEventListener(this) {
 override fun onOrientationChanged(orientation: Int) {
 if (orientation == ORIENTATION_UNKNOWN) return // Skip unknown orientations.
 // Determine the new rotation angle.
 val newRotation = when {
 orientation >= 315 || orientation < 45 -> 0
 orientation >= 45 && orientation < 135 -> 90
 orientation >= 135 && orientation < 225 -> 180
 orientation >= 225 && orientation < 315 -> 270
 else -> 0
 }
 // If the rotation has changed and recording is active, update the rotation.
 if (newRotation != currentRotation && isRecording) {
 Log.d("ROTATION", "Orientation change detected: $newRotation")
 currentRotation = newRotation
 }
 }
 }
 orientationListener.enable()

 // Set up the TextureView listener to know when the surface is available.
 textureView.surfaceTextureListener = object : TextureView.SurfaceTextureListener {
 override fun onSurfaceTextureAvailable(surface: SurfaceTexture, width: Int, height: Int) {
 // Open the camera when the texture becomes available.
 openCamera()
 }
 override fun onSurfaceTextureSizeChanged(surface: SurfaceTexture, width: Int, height: Int) {}
 override fun onSurfaceTextureDestroyed(surface: SurfaceTexture) = false
 override fun onSurfaceTextureUpdated(surface: SurfaceTexture) {}
 }
 }

 /**
 * Sets up the HLS directory in the public Downloads folder.
 * If the directory exists, it deletes it recursively and creates a new one.
 */
 private fun setupHLSDirectory() {
 val downloadsDir = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS)
 hlsDir = File(downloadsDir, "HLS_Output")

 if (hlsDir.exists()) {
 hlsDir.deleteRecursively()
 }
 hlsDir.mkdirs()

 Log.d("HLS", "📂 HLS folder created: ${hlsDir.absolutePath}")
 }

 /**
 * Opens the camera after checking for necessary permissions.
 */
 private fun openCamera() {
 if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED ||
 ActivityCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
 // Request permissions if they are not already granted.
 ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO), 101)
 return
 }

 try {
 // Open the specified camera using its cameraId.
 cameraManager.openCamera(cameraId, object : CameraDevice.StateCallback() {
 override fun onOpened(camera: CameraDevice) {
 cameraDevice = camera
 // Start the recording session once the camera is opened.
 startNextRecording()
 }
 override fun onDisconnected(camera: CameraDevice) { camera.close() }
 override fun onError(camera: CameraDevice, error: Int) { camera.close() }
 }, null)
 } catch (e: CameraAccessException) {
 e.printStackTrace()
 }
 }

 /**
 * Starts a new recording session:
 * - Sets up the preview and recorder surfaces.
 * - Creates a pipe for MediaRecorder output.
 * - Creates a capture session for simultaneous preview and recording.
 */
 private fun startNextRecording() {
 // Get the SurfaceTexture from the TextureView and set its default buffer size.
 val texture = textureView.surfaceTexture!!
 texture.setDefaultBufferSize(1920, 1080)
 // Create the preview surface.
 previewSurface = Surface(texture)

 // Create and configure the MediaRecorder.
 activeRecorder = createMediaRecorder()

 // Create a pipe to route MediaRecorder data.
 val pipe = ParcelFileDescriptor.createPipe()
 val pfdWrite = pipe[1] // Write end used by MediaRecorder.
 val pfdRead = pipe[0] // Read end used by the local socket server.

 // Set MediaRecorder output to the file descriptor of the write end.
 activeRecorder.setOutputFile(pfdWrite.fileDescriptor)
 setupMediaRecorder(activeRecorder)
 // Obtain the recorder surface from MediaRecorder.
 recorderSurface = activeRecorder.surface

 // Create a capture request using the RECORD template.
 val captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD)
 captureRequestBuilder.addTarget(previewSurface)
 captureRequestBuilder.addTarget(recorderSurface)

 // Create a capture session including both preview and recorder surfaces.
 cameraDevice.createCaptureSession(
 listOf(previewSurface, recorderSurface),
 object : CameraCaptureSession.StateCallback() {
 override fun onConfigured(session: CameraCaptureSession) {
 cameraCaptureSession = session
 captureRequestBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO)
 // Start a continuous capture request.
 cameraCaptureSession.setRepeatingRequest(captureRequestBuilder.build(), null, null)

 // Launch a coroutine to start FFmpeg and MediaRecorder with synchronization.
 scope.launch {
 startFFmpeg()
 delay(500) // Wait for FFmpeg to be ready.
 activeRecorder.start()
 isRecording = true
 Log.d("HLS", "🎥 Recording started...")
 }

 // Launch a coroutine to run the local socket server to forward data.
 scope.launch {
 startLocalSocketServer(pfdRead)
 }
 }
 override fun onConfigureFailed(session: CameraCaptureSession) {
 Log.e("Camera2", "❌ Configuration failed")
 }
 },
 null
 )
 }

 /**
 * Coroutine to start a local socket server.
 * It reads from the MediaRecorder pipe and sends the data to FFmpeg.
 */
 private suspend fun startLocalSocketServer(pfdRead: ParcelFileDescriptor) {
 withContext(Dispatchers.IO) {
 val serverSocket = ServerSocket(ffmpegPort)
 Log.d("HLS", "Local socket server started on port $ffmpegPort")

 // Accept connection from FFmpeg.
 val socket = serverSocket.accept()
 Log.d("HLS", "Connection accepted from FFmpeg")

 // Read data from the pipe and forward it through the socket.
 val inputStream = ParcelFileDescriptor.AutoCloseInputStream(pfdRead)
 val outputStream = socket.getOutputStream()
 val buffer = ByteArray(8192)
 var bytesRead: Int
 while (inputStream.read(buffer).also { bytesRead = it } != -1) {
 outputStream.write(buffer, 0, bytesRead)
 }
 outputStream.close()
 inputStream.close()
 socket.close()
 serverSocket.close()
 }
 }

 /**
 * Coroutine to start FFmpeg using a local TCP input.
 * Applies a video rotation filter based on device orientation and generates HLS segments.
 */
 private suspend fun startFFmpeg() {
 withContext(Dispatchers.IO) {
 // Retrieve the appropriate transpose filter based on current rotation.
 val transposeFilter = getTransposeFilter(currentRotation)

 // FFmpeg command to read from the TCP socket and generate an HLS stream.
 // Two alternative commands are commented below.
 // val ffmpegCommand = "-fflags +genpts -i tcp://localhost:$ffmpegPort -c copy -bsf:a aac_adtstoasc -movflags +faststart -f dash -seg_duration 10 -hls_playlist 1 ${hlsDir.absolutePath}/manifest.mpd"
 // val ffmpegCommand = "-fflags +genpts -i tcp://localhost:$ffmpegPort -c copy -bsf:a aac_adtstoasc -movflags +faststart -f hls -hls_time 5 -hls_segment_type fmp4 -hls_flags split_by_time -hls_list_size 0 -hls_playlist_type event -hls_fmp4_init_filename init.mp4 -hls_segment_filename ${hlsDir.absolutePath}/segment_%03d.m4s ${hlsDir.absolutePath}/playlist.m3u8"
 val ffmpegCommand = "-fflags +genpts -i tcp://localhost:$ffmpegPort -vf $transposeFilter -c:v libx264 -preset ultrafast -crf 23 -c:a copy -movflags +faststart -f hls -hls_time 0.1 -hls_segment_type mpegts -hls_flags split_by_time -hls_list_size 0 -hls_playlist_type event -hls_segment_filename ${hlsDir.absolutePath}/segment_%03d.ts ${hlsDir.absolutePath}/playlist.m3u8"

 FFmpegKit.executeAsync(ffmpegCommand) { session ->
 if (session.returnCode.isValueSuccess) {
 Log.d("HLS", "✅ HLS generated successfully")
 } else {
 Log.e("FFmpeg", "❌ Error generating HLS: ${session.allLogsAsString}")
 }
 }
 }
 }

 /**
 * Gets the current device rotation using the WindowManager.
 */
 private fun getDeviceRotation(): Int {
 val windowManager = getSystemService(Context.WINDOW_SERVICE) as WindowManager
 return when (windowManager.defaultDisplay.rotation) {
 Surface.ROTATION_0 -> 0
 Surface.ROTATION_90 -> 90
 Surface.ROTATION_180 -> 180
 Surface.ROTATION_270 -> 270
 else -> 0
 }
 }

 /**
 * Returns the FFmpeg transpose filter based on the rotation angle.
 * Used to rotate the video stream accordingly.
 */
 private fun getTransposeFilter(rotation: Int): String {
 return when (rotation) {
 90 -> "transpose=1" // 90° clockwise
 180 -> "transpose=2,transpose=2" // 180° rotation
 270 -> "transpose=2" // 90° counter-clockwise
 else -> "transpose=0" // No rotation
 }
 }

 /**
 * Creates and configures a MediaRecorder instance.
 * Sets up audio and video sources, formats, encoders, and bitrates.
 */
 private fun createMediaRecorder(): MediaRecorder {
 return MediaRecorder().apply {
 setAudioSource(MediaRecorder.AudioSource.MIC)
 setVideoSource(MediaRecorder.VideoSource.SURFACE)
 setOutputFormat(MediaRecorder.OutputFormat.MPEG_2_TS)
 setVideoEncodingBitRate(5000000)
 setVideoFrameRate(24)
 setVideoSize(1080, 720)
 setVideoEncoder(MediaRecorder.VideoEncoder.H264)
 setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
 setAudioSamplingRate(16000)
 setAudioEncodingBitRate(96000) // 96 kbps
 }
 }

 /**
 * Prepares the MediaRecorder and logs the outcome.
 */
 private fun setupMediaRecorder(recorder: MediaRecorder) {
 try {
 recorder.prepare()
 Log.d("HLS", "✅ MediaRecorder prepared")
 } catch (e: IOException) {
 Log.e("HLS", "❌ Error preparing MediaRecorder", e)
 }
 }

 /**
 * Custom HLS server class extending NanoHTTPD.
 * Serves HLS segments and playlists from the designated HLS directory.
 */
 private inner class HlsServer(port: Int, private val hlsDir: File, private val context: Context) : NanoHTTPD(port) {
 override fun serve(session: IHTTPSession): Response {
 val uri = session.uri.trimStart('/')

 // Intercept the request for `init.mp4` and serve it from assets.
 /*
 if (uri == "init.mp4") {
 Log.d("HLS Server", "📡 Intercepting init.mp4, sending file from assets...")
 return try {
 val assetManager = context.assets
 val inputStream = assetManager.open("init.mp4")
 newFixedLengthResponse(Response.Status.OK, "video/mp4", inputStream, inputStream.available().toLong())
 } catch (e: Exception) {
 Log.e("HLS Server", "❌ Error reading init.mp4 from assets: ${e.message}")
 newFixedLengthResponse(Response.Status.INTERNAL_ERROR, MIME_PLAINTEXT, "Server error")
 }
 }
 */

 // Serve all other HLS files normally from the hlsDir.
 val file = File(hlsDir, uri)
 return if (file.exists()) {
 newFixedLengthResponse(Response.Status.OK, getMimeTypeForFile(uri), file.inputStream(), file.length())
 } else {
 newFixedLengthResponse(Response.Status.NOT_FOUND, MIME_PLAINTEXT, "File not found")
 }
 }
 }

 /**
 * Clean up resources when the activity is destroyed.
 * Stops recording, releases the camera, cancels coroutines, and stops the HLS server.
 */
 override fun onDestroy() {
 super.onDestroy()
 if (isRecording) {
 activeRecorder.stop()
 activeRecorder.release()
 }
 cameraDevice.close()
 scope.cancel()
 hlsServer.stop()
 orientationListener.disable()
 Log.d("HLS", "🛑 Activity destroyed")
 }
}



I have three examples of ffmpeg commands (the first two are the commented-out alternatives in startFFmpeg() above; the third is the active one):
- One command segments into DASH, but the camera does not have the correct rotation.
- One command segments into HLS without re-encoding, with 5-second segments; it's fast but does not have the correct rotation.
- One command segments into HLS with re-encoding, which applies a rotation. It's too slow for 5-second segments, so a 1-second segment was chosen.

Note:
- In the second command (HLS without re-encoding), the output is fMP4. To achieve the correct rotation, I serve a preconfigured init.mp4 when it is requested over HTTP (see the commented-out block in the HlsServer class).
- In the third command (HLS with re-encoding), the output is TS.
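For completeness, a sketch of how such a preconfigured init.mp4 might be produced offline, assuming the rotation is known in advance (my assumption of the preparation step, not a tested recipe): run the no-re-encode HLS command once over a short sample captured at that orientation, after stamping the rotation, and keep the generated init segment:

ffmpeg -display_rotation 90 -i sample.ts -c copy -f hls -hls_segment_type fmp4 -hls_fmp4_init_filename init.mp4 -hls_time 5 prep/playlist.m3u8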
-
Multilingual SEO: A Marketer’s Guide to Measuring and Optimising Multilingual Websites
26 June, by Joe
The web, and search engines in particular, make it easier than ever for businesses of any size to reach an international audience.
A multilingual website makes sense, especially when the majority of websites are in English. After all, you want to stand out to customers by speaking their local language. But it’s no good having a multilingual site if people can’t find it. That’s where multilingual SEO comes in.
In this article, we’ll show you how to build a multilingual website that ranks in Google and other local search engines. You’ll learn why multilingual SEO is about more than translating your content, and the specific tasks you need to tick off to make your multilingual site as visible as possible.
¡Vamos!
What is multilingual SEO?
Multilingual SEO is the process of optimising your website to improve search visibility in more than one language. It involves creating high-quality translations (including SEO metadata), targeting language-specific keywords and building links in the target language.
The goal is to make your site as discoverable and accessible as possible for users searching Google and other search engines in their local language.
It’s worth pointing out that multilingual SEO differs slightly from international SEO, even if the terms are used interchangeably. With multilingual SEO, you are optimising for a language (so Spanish targets every Spanish-speaking country, not just Spain). In international SEO, you target specific countries, so you might have a different strategy for targeting Argentinian customers vs. Mexican customers.
Why adopt a multilingual SEO strategy?
There are two major reasons to adopt a multilingual SEO strategy: to reach more customers and to deliver the best experience possible.
Reach a wider audience
Not everyone searches the web in English. Even if non-native speakers eventually resort to English, many will try Googling in their own language first. That means if you target customers in multiple non-English-speaking countries, a multilingual SEO strategy is a must to reach as many of them as possible.
A multilingual SEO strategy also boosts your website’s chances of appearing in country-specific search engines like Baidu and Yandex — and in localised versions of Google like Google.fr and Google.de.
Deliver a better user experience
Multilingual SEO gives your customers what they want: the ability to search, browse and shop in their native language. This is a big deal, with 89% of consumers saying it’s important to deal with a brand in their own language.
Improving the user experience also increases the likelihood of non-English-speaking customers converting. As many as 82% of people won’t make a purchase in major consumer categories without local language support.
How to prepare for multilingual SEO success
Before you start creating multilingual SEO content, you need to take care of a couple of things.
Identify target markets
The first step is to identify the languages you want to target. You know your customers better than anyone, so it’s likely you have one or two languages in mind already.
But if you don’t, why not analyse your existing website traffic to discover which languages to target first? The Locations report in Matomo (found in the Visitors section of Matomo’s navigation) shows you which countries your visitors hail from.
In the example above, targeting German and Indonesian searchers would be a sensible strategy.
Target local keywords
Once you’ve decided on your target markets, it’s time to find localised keywords. Keywords are the backbone of any SEO campaign, so take your time to find ones that are specific to your local markets.
Yes, that means you shouldn’t just translate your English keywords into French or Spanish! French or Spanish searchers may use completely different terms to find your products or services.
That’s why it’s vital to use a tool like Ahrefs or Semrush to do multilingual keyword research.
This may be a bit tricky if you aren’t a native speaker of your target language, but you can translate your English keywords using Google Translate to get started.
Remember, search volumes won’t be as high as for English keywords, since fewer people are searching for them. So don’t be scared off by small keyword volumes. Besides, even in the U.S., around 95% of keywords get 10 searches per month or fewer.
Choose your URL structure
The final step in preparing your multilingual SEO strategy is deciding on your URL structure, whether that’s using separate domains, subdomains or subfolders.
This is important for SEO as it will avoid duplicate content issues. Using language indicators within these URLs will also help both users and search engines differentiate versions of your site.
The first option is to have a separate domain for each target language.
- yoursite.com
- yoursite.fr
- yoursite.es
Using subdomains would mean you keep one domain but have completely separate sites:
- fr.yoursite.com
- es.yoursite.com
- de.yoursite.com
Using subfolders keeps everything clean but can result in long URLs:
- yoursite.com/en
- yoursite.com/de
- yoursite.com/es
As you can see in the image below, we use subdomains to separate multilingual versions of our site:
While separate domains provide more precise targeting, it’s a lot of work to manage them. So, unless you have a keyword-rich, unbranded domain name that needs translating, we’d recommend using either subdomains or subdirectories. It’s slightly easier to manage subfolders, but subdomains offer users a clearer divide between different versions of your site.
If you want to make your site even easier to navigate, then you can incorporate language indicators into your page’s design to make it easy for consumers to switch languages. These are the little dropdown menus you see containing various flags that let users browse in different languages.
5 multilingual SEO strategies to use in 2024
Now that you’ve got the basics in order, use the following SEO strategies to improve your multilingual rankings.
Use hreflang tags
Google and other search engines use another signal to determine the language and region your website is targeting: hreflang.
Hreflang is an HTML attribute that helps search engines serve users the right version of a page.
You can insert it into the header section of the page, like this example for a German subdomain:
<link rel="alternate" href="https://yourwebsite.com/de" hreflang="de" />
Or you can add the relevant markup to your website’s sitemap. Here’s what the same German markup would look like:
<xhtml:link rel="alternate" hreflang="de" href="https://yourwebsite.com/de/" />
Whichever method you choose, include one language code in ISO 639-1 format. You can also include a region code in ISO 3166-1 Alpha-2 format, and a single page can carry multiple region codes. A web page in German, for example, could target both German and Austrian consumers.
Hreflang tags also help you avoid duplicate content issues.
With a multilingual site, you could have a dozen different versions of the same page, showing the same content but in a different language. Without an hreflang tag specifying that these are different versions of the same page, Google may penalise your site.
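To illustrate (using yoursite.com as a placeholder), a German-language page targeting both Germany and Austria would repeat the tag per region, plus an x-default fallback for everyone else; each language version of the page should carry the same full set of tags, including a reference to itself:

<link rel="alternate" hreflang="de-DE" href="https://yoursite.com/de/" />
<link rel="alternate" hreflang="de-AT" href="https://yoursite.com/de/" />
<link rel="alternate" hreflang="x-default" href="https://yoursite.com/" />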
Invest in high-quality translations
Google rewards good content. And, while you’d hope Google Translate would be good enough, it usually isn’t.
Instead, make sure you are using professional linguists to translate your content. They will not only produce accurate and contextually relevant translations (the kind Google may reward with higher rankings) but also account for cultural differences between languages.
Imagine you are translating a web page from U.S. English into Italian, for example. You’ve not only got to translate the words themselves but also the measurements (from inches to cm), dates (from mm/dd/yy to dd/mm/yy), currencies, idioms and more.
Translate your metadata, too
You need to translate more than just the content of your website. You should translate its metadata — the descriptive information search engines use to understand your page — to help you rank better in Google and localised search engines.
As you can see in the image below, we’ve translated the French version of our homepage’s title and meta description:
Page titles and meta descriptions aren’t the only pieces of metadata you need to pay attention to. Make sure you translate the following:
- URLs
- Image alt tags
- Canonical tags
- Structured data markup
While you’re at it, make sure you have translated all of your website’s content, too. It’s easy to miss error messages, contact forms and checkout pages that would otherwise ruin the user experience.
Build multilingual backlinks
Building backlinks is an important step in any SEO strategy. But it’s doubly important in multilingual SEO, where links in your target language also help Google understand that you have a translated website.
While you want to prioritise links from websites in your target language, make sure those websites are relevant to your niche. It’s no good having a link from a Spanish recipe blog if you have a marketing SaaS tool.
A great place to start is by mining the links of competitors in your target market. Your competitors have already done the hard work acquiring these links, and there’s every chance these websites will link to your translated content, too.
Don’t forget about internally linking pages in the same language, either. This will obviously help users stay in the same language while navigating your site, but it will also show Google the depth of your multilingual content.
Monitor the SEO health of your multilingual site
The technical performance of your multilingual pages has a significant impact on your ability to rank and convert.
We know for a fact that Google uses page performance metrics in the form of Core Web Vitals as a search ranking factor. What’s more, research by WP Rocket finds that a site loading in one second has a conversion rate three times better than a site loading in five seconds.
With that in mind, make sure your site is performing at optimal levels using Matomo’s SEO Web Vitals report. Our SEO Web Vitals feature tracks all of Google’s Core Web Vitals, including:
- Page Speed Score
- First Contentful Paint (FCP)
- First Input Delay (FID)
- Largest Contentful Paint (LCP)
- Cumulative Layout Shift (CLS)
The report displays each metric in a different colour depending on your site’s performance, with green meaning good, orange meaning average, and red meaning poor.
Check in on these metrics regularly or set up custom alerts to automatically notify you when a specific metric drops below or exceeds a certain threshold — like if your Page Speed score falls below 50, for example.
How to track your multilingual SEO efforts with Matomo
Matomo isn’t just a great tool to track your site’s SEO health; you can also use our privacy-focused analytics platform to track your multilingual SEO success.
If you want to analyse the performance of your new language, for example, you can segment traffic by URL. In our case, we use the segment “Page URL contains fr.matomo.org” to measure the impact of our French website.
We can also track the performance of every language except French by using the segment “Page URL does not contain fr.matomo.org”.
You can use Matomo to track your keyword performance, too. Unlike search-engine-owned platforms like Google Analytics and Google Search Console, which no longer share keyword data, Matomo lets you see exactly which keywords users searched to find your site, in the Combined keywords report:
This is valuable information you can use to identify new keyword opportunities and improve your multilingual content strategy.
For example, you could use the report to focus your multilingual SEO efforts on a single language if its searches are starting to rival English. Or you could translate your most-trafficked English keywords into your target languages, regardless of whether a tool like Ahrefs or Semrush shows search volume for them.
For international brands that have separate websites and apps for each target language or region, Matomo’s Roll-Up Reporting lets you keep track of aggregate data in one place.
Roll-Up Reporting lets you view data from multiple websites and apps as if they were a single site. This lets you quickly answer questions like :
- How many visits happened across all of my multilingual websites?
- Which languages contributed the most conversions?
- How does the performance of my Spanish app compare to my Spanish website?
Is it any wonder, then, that Matomo is used by over one million sites in 190 countries to track their web and SEO performance in a privacy-friendly way ?
Join them today by trying Matomo free for 21 days, no credit card required. Alternatively, request a demo to see how Matomo can help you track your multilingual SEO efforts.
-
FFmpeg C API - Reduce fps but maintain video duration
25 March 2015, by Justin Bradley
Using the FFmpeg C API I’m trying to convert an input video into a video that looks like an animated GIF, meaning no audio stream and a 4 fps video stream.
I have the decode/encode part working. I can drop the audio stream from the output file, but I’m having trouble reducing the fps. I can change the output video stream’s time_base to match 4 fps, but it increases the video’s duration - basically playing it in slow motion.
I think I need to drop the extra frames before I write them to the output container.
Below is the loop where I read the input frames and then write them to the output container.
Is this where I’d drop the extra frames? How do I determine which frames to drop (I, P, B frames)?
while (av_read_frame(input_container, &decoded_packet) >= 0) {
    if (decoded_packet.stream_index == video_stream_index) {
        len = avcodec_decode_video2(input_stream->codec, decoded_frame, &got_frame, &decoded_packet);
        if (len < 0) {
            exit(1);
        }
        if (got_frame) {
            av_init_packet(&encoded_packet);
            encoded_packet.data = NULL;
            encoded_packet.size = 0;
            if (avcodec_encode_video2(output_stream->codec, &encoded_packet, decoded_frame, &got_frame) < 0) {
                exit(1);
            }
            if (got_frame) {
                if (output_stream->codec->coded_frame->key_frame) {
                    encoded_packet.flags |= AV_PKT_FLAG_KEY;
                }
                encoded_packet.stream_index = output_stream->index;
                encoded_packet.pts = av_rescale_q(current_frame_num, output_stream->codec->time_base, output_stream->time_base);
                encoded_packet.dts = av_rescale_q(current_frame_num, output_stream->codec->time_base, output_stream->time_base);
                if (av_interleaved_write_frame(output_container, &encoded_packet) < 0) {
                    exit(1);
                } else {
                    current_frame_num += 1;
                }
            }
            frame_count += 1;
            av_free_packet(&encoded_packet);
        }
    }
}
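In case it helps, a sketch of one answer to the question above: once avcodec_decode_video2 has produced a frame, it is a raw picture, so the I/P/B distinction no longer matters and any decoded frame may be skipped before encoding. The helper below (keep_frame is my name, not an FFmpeg API) keeps a frame whenever it crosses into a new output-frame slot; because pts/dts are already stamped from current_frame_num in the output time base, the kept frames stay 1/4 s apart and the original duration is preserved.

#include <stdint.h>
#include <libavutil/rational.h>

/* Decide whether to keep decoded frame number in_index when decimating
 * from in_fps down to out_fps: keep a frame each time
 * floor(in_index * out_fps / in_fps) advances, e.g. 4 of every 30 frames
 * when going from 30 fps to 4 fps. */
static int keep_frame(int64_t in_index, AVRational in_fps, AVRational out_fps)
{
    int64_t num = (int64_t)out_fps.num * in_fps.den; /* out_fps / in_fps ...  */
    int64_t den = (int64_t)out_fps.den * in_fps.num; /* ... as a single ratio */
    if (in_index == 0)
        return 1;
    return (in_index * num) / den > ((in_index - 1) * num) / den;
}

/* Usage inside the loop above, right after the decode's got_frame check
 * (in_fps could come from input_stream->avg_frame_rate, for example):
 *
 *     if (!keep_frame(frame_count++, in_fps, (AVRational){4, 1}))
 *         continue;    // decoded but never encoded
 *
 * with the existing frame_count += 1 removed so frames are counted once.
 * current_frame_num still only advances for written frames, so the stamped
 * timestamps keep the original duration at 4 fps. */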