Mailing List Archive

Re: Google Translate now assists with human translations of Wikipedia articles
Let me agree with it completely (out of the shadow ;). This feature's aim is
obviously to help understand totally "alien" texts to a certain [at least
minimal?] extent. This whole thing has absolutely nothing to do with
'translation/interpretation' in its proper sense. It's a pair of crutches
for those who are otherwise helpless. ;)

B.

-----Original Message-----
From: foundation-l-bounces@lists.wikimedia.org
[mailto:foundation-l-bounces@lists.wikimedia.org] On Behalf Of Peter Gervai
Sent: Wednesday, June 10, 2009 1:28 PM
To: Wikimedia Foundation Mailing List
Subject: Re: [Foundation-l] Google Translate now assists with
human translations of Wikipedia articles

On Wed, Jun 10, 2009 at 00:54, masti<mastigm@gmail.com> wrote:
> current level of sophistication of translation tools, especially of
> languages that do not belong to the same group as English, German,
> French, etc. is completely useless.

Let me disagree. Hungarian is not in the same group by far, and the results
make it possible to understand more than 50% of the text (sometimes I'd say
above 90%). While this is far from proper translation it is by no means
_useless_, since its obvious use is to understand a completely foreign text
to some extent.

And I'd like to second that the quality has been improving noticeably,
whether or not state-of-the-art linguistic science backs it up in theory.
This is observation, not theory.

But I see this is an exaggeration contest, so I'll go back to the shadow.
:-)

grin


_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
What I see as a great feature in the toolkit is the translation memory: in
practice (after you switch off the machine translation), common phrases in
Wikipedia articles - like "external links", "notes", "history", "early life"
etc. - are pretranslated once a human has already translated them; if more
than one person starts working on the same article separately, they can make
use of the other users' translations and build upon them (without having to
explicitly 'collaborate' or 'share' for this function to work).

Also, if you were to translate [[Bird species 1]], [[Bird species 2]],
[[Bird species 3]], I think you would get some very useful suggestions for
translating [[Bird species 4]].
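
To make the idea concrete, here is a rough sketch of such an exact-match
translation memory keyed on source segments (my own simplification in
Python, not how Google's toolkit is actually implemented; the Hungarian
string is just an example entry):

    # Toy exact-match translation memory: once a human has translated a
    # segment, every later occurrence of that segment is pre-filled.
    class TranslationMemory:
        def __init__(self):
            self.segments = {}  # source segment -> stored human translation

        def store(self, source, translation):
            self.segments[source.strip()] = translation.strip()

        def lookup(self, source):
            # Return the stored human translation, or None if nobody has
            # translated this segment yet.
            return self.segments.get(source.strip())

    tm = TranslationMemory()
    tm.store("External links", "Külső hivatkozások")  # example entry
    print(tm.lookup("External links"))  # reused in every later article
    print(tm.lookup("Early life"))      # None -> still needs a human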

Best,
Bence Damokos

On Wed, Jun 10, 2009 at 1:38 PM, Bennó <benno79@freemail.hu> wrote:

> and totally "alien" texts to a certain [at least
> minimal?] extent. This whole thing has absolutely nothing to do with
> 'translation/interpretation' in its proper sense. It's a pair of crutches
> for those who are otherwise helpless. ;)
>
_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
On Wed, Jun 10, 2009 at 14:46, Bence Damokos<bdamokos@gmail.com> wrote:
> What I see as a great feature in the toolkit is the translation memory: in
> practice (after you switch off the machine translation), common phrases in
> Wikipedia articles - like "external links", "notes", "history", "early life"
> etc. - are pretranslated once a human has already translated them; if more
> than one person starts working on the same article separately, they can make
> use of the other users' translations and build upon them (without having to
> explicitly 'collaborate' or 'share' for this function to work).

Maybe, but even in the best case it can work for very short passages.
Two or three sentences at most. And it would be taken out of context.

--
אמיר אלישע אהרוני
Amir Elisha Aharoni

http://aharoni.wordpress.com

"We're living in pieces,
I want to live in peace." - T. Moore

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
On Wed, Jun 10, 2009 at 1:56 PM, Amir E. Aharoni <amir.aharoni@gmail.com> wrote:

> On Wed, Jun 10, 2009 at 14:46, Bence Damokos<bdamokos@gmail.com> wrote:
> > What I see as a great feature in the toolkit is the translation memory:
> > in practice (after you switch off the machine translation), common
> > phrases in Wikipedia articles - like "external links", "notes",
> > "history", "early life" etc. - are pretranslated once a human has
> > already translated them; if more than one person starts working on the
> > same article separately, they can make use of the other users'
> > translations and build upon them (without having to explicitly
> > 'collaborate' or 'share' for this function to work).
>
> Maybe, but even in the best case it can work for very short passages.
> Two or three sentences at most. And it would be taken out of context.


If you were working on the very same article, it would obviously be in
context...; and the short phrases tend to be common, especially considering
that Google treats the targets of the links separately, which allows for
creating a sort of glossary.
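
As a rough illustration (my own sketch in Python with made-up glossary
entries, not Google's actual implementation), extracting the link targets
from the wikitext is already enough to seed such a glossary:

    import re

    # Pull [[Target]] / [[Target|label]] link targets out of wikitext and
    # look each one up in a glossary of human translations.
    WIKILINK = re.compile(r"\[\[([^|\]]+)(?:\|[^\]]*)?\]\]")

    glossary = {"Bird": "Madár", "Plumage": "Tollazat"}  # made-up entries

    wikitext = "The [[Bird]] is known for its colourful [[Plumage|plumage]]."
    for target in WIKILINK.findall(wikitext):
        print(target, "->", glossary.get(target, "(not translated yet)"))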

Best,
Bence
_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Bennó wrote:
> Let me agree with it completely (out of the shadow ;). This feature's aim is
> obviously to help understand totally "alien" texts to a certain [at least
> minimal?] extent. This whole thing has absolutely nothing to do with
> 'translation/interpretation' in its proper sense. It's a pair of crutches
> for those who are otherwise helpless. ;)
>
>
Sure, but even with 90% accuracy (which is still very low) one needs to
remain aware of the limitations of machine translation. Seeing it as a
crutch is a healthy approach. What needs to be discouraged is the
dangerous techno-pop attitude that there is a machine solution for every
situation, and that machines can find the magic substitute for common sense.

Ec

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
I would just like to point out that every single critic has ignored the
premise that I started this thread with:

"This is a great example of machines helping people help machines help
people."

On Wed, Jun 10, 2009 at 10:53 AM, Ray Saintonge <saintonge@telus.net> wrote:

> Bennó wrote:
> > Let me agree with it completely (out of the shadow ;). This feature's
> > aim is obviously to help understand totally "alien" texts to a certain
> > [at least minimal?] extent. This whole thing has absolutely nothing to
> > do with 'translation/interpretation' in its proper sense. It's a pair
> > of crutches for those who are otherwise helpless. ;)
> >
> >
> Sure, but even with 90% accuracy (which is still very low) one needs to
> remain aware of the limitations of machine translation. Seeing it as a
> crutch is a healthy approach. What needs to be discouraged is the
> dangerous techno-pop attitude that there is a machine solution for every
> situation, and that machines can find the magic substitute for common
> sense.
>
> Ec
>
_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Brian wrote:
> I would just like to point out that every single critic has ignored the
> premise that I started this thread with:
>
> "This is a great example of machines helping people help machines help
> people."
>
>

I don't disagree with that point, but I often note in real life that
many people who seek help want to substitute that help for any exercise
of their own little grey cells.

I have no problem with using a machine translation as a starting point
because these translations are uncopyrightable beyond pre-existing
copyrights.

Ec

> On Wed, Jun 10, 2009 at 10:53 AM, Ray Saintonge wrote:
>
>> Sure, but even with 90% accuracy (which is still very low) one needs to
>> remain aware of the limitations of machine translation. Seeing it as a
>> crutch is a healthy approach. What needs to be discouraged is the
>> dangerous techno-pop attitude that there is a machine solution for every
>> situation, and that machines can find the magic substitute for common
>> sense.
>>
>> Ec
>>


_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
On Wed, Jun 10, 2009 at 20:01, Brian<Brian.Mingus@colorado.edu> wrote:
> I would just like to point out that every single critic has ignored the
> premise that I started this thread with:
>
> "This is a great example of machines helping people help machines help
> people."

That, again, would be Wikipedia, not Google. No one knows how these
Google algorithms work, so I can't really know how helpful I am.

--
אמיר אלישע אהרוני
Amir Elisha Aharoni

http://aharoni.wordpress.com

"We're living in pieces,
I want to live in peace." - T. Moore

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
> Let me disagree. Hungarian is not in the same group by far, and the
> results make it possible to understand more than 50% of the text
> (sometimes I'd say above 90%). While this is far from proper
> translation it is by no means _useless_, since its obvious use is to
> understand a completely foreign text to some extent.
>

IMHO automatic translations into Polish are useless, as they only allow rough orientation in the contents of an article. It concerns not only translations from Hungarian (in which some of the words whose Polish counterparts were unknown to the automatic translator were left untranslated or translated into English), but even translations from German. (I was trying articles on children's literature ;-)

Picus viridis



_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Hoi,
The quality of the translations will vary. There are many reasons for it and
one of the things that will make a difference is the number of people using
the translate tool as a rough first pass. Once this is done, using the
translation functionality will help Google to improve the quality of the
code.

This has been said before; there is no news here. What is relevant, however,
is that in order to support the languages that have not been supported so
far, there is a need for people actually using this tool to build the
translation corpus that gets you this first pass functionality.

Translation is not something where a silver bullet will provide an "instant
on - high quality" experience and it is the languages that are currently not
supported that have the highest need for tools like this.
Thanks,
GerardM

2009/6/13 picus-viridis <picus-viridis@o2.pl>

> > Let me disagree. Hungarian is not in the same group by far, and the
> > results make it possible to understand more than 50% of the text
> > (sometimes I'd say above 90%). While this is far from proper
> > translation it is by no means _useless_, since its obvious use is to
> > understand a completely foreign text to some extent.
> >
>
> IMHO automatic translations into Polish are useless, as they only allow
> rough orientation in the contents of an article. It concerns not only
> translations from Hungarian (in which part of the words whose Polish
> counterparts were unknown to the automatic translator were left untranslated
> or translated into English), but even translations from German. (I was
> trying articles on the children's literature ;-)
>
> Picus viridis
>
>
>
_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Gerard Meijssen wrote:
> Hoi,
> The quality of the translations will vary. There are many reasons for it and
> one of the things that will make a difference is the number of people using
> the translate tool as a rough first pass. Once this is done, using the
> translation functionality will help Google to improve the quality of the
> code.
>
> This has been said before, there is no news here. What is relevant however
> is that in order to support the languages that have not been supported so
> far, there is a need for people actually using this tool to build the
> translation corpus that gets you this first pass functionality.
>
> Translation is not something where a silver bullet will provide an "instant
> on - high quality" experience and it is the languages that are currently not
> supported that have the highest need for tools like this.
This is interesting. I did not know it's possible to train new
languages. Is there any available information on the requirements? What
requirements need to be met for Google to support them (so they can be
selected in the drop-down in the translator toolkit)? _How much_ text do
they need as a basis to finally enable the translation function?

(My personal experience with the collaborativeness of Google is a bad
one. Although Google is a multi-billion dollar company and [in a fair
world] should actually _pay_ people for things like translating their
interface into as many languages as possible [as Google, with its 80%
search engine market share, is one of the most important internet access
vectors, and not having a search engine in your language is a big
accessibility barrier], they rather choose to go the cheap way and let
volunteers translate it. Not content with that, they have the chutzpah
to _reject_ adding any further languages [no additions since at least
2007, although they still support Elmer Fudd, bork bork bork, Klingon
and pirate speak...]. At the moment Google supports the languages of
roughly 85 to 90% of the world's population and it seems they don't
care about the rest.)

Marcus Buck

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Hoi,
One of the most important things needed for adding languages to a
technology like this is having a sufficiently sized corpus. For general
availability, the expectation for quality is quite high. To me this seems
to be one reason why Google did not add more languages. Another reason why
many corpora are not big enough is the problem of identifying the language
a text is written in. Consider that, as I learned a few years ago, only a
small percentage of Internet content carries metadata for the language
used, and that something like 75% of that metadata is actually wrong...
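
To illustrate the point, a toy check along these lines (my own sketch
using the third-party langdetect package, with made-up snippets; not
anything Google runs) already shows how declared tags and detected
languages can disagree:

    # Compare the language a page declares with what a statistical
    # detector thinks it is. The sample pages are made up.
    from langdetect import detect

    pages = [
        ("en", "A harkály erdei madár."),           # declared en, actually Hungarian
        ("de", "Der Specht ist ein Waldvogel."),    # declared correctly
        (None, "Le pic est un oiseau forestier."),  # no language metadata at all
    ]

    for declared, text in pages:
        guessed = detect(text)
        if declared is None:
            print("no declared language; detector says:", guessed)
        elif declared != guessed:
            print("declared", declared, "but looks like", guessed)
        else:
            print("declared", declared, "confirmed by detection")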

Given that Google actually supports MediaWiki, it may be that they are
willing to support our languages. The problem, however, is that many of
our languages have invalid or even wrong codes. The consequence is that it
is not straightforward to just support our "languages". This issue will not
be resolved as long as people are under the impression that the "community"
has the final word about the names of our languages. This is naive as well
as problematic, because it makes it harder to argue that Google should
support our languages.
Thanks,
GerardM

2009/6/15 Marcus Buck <me@marcusbuck.org>

> Gerard Meijssen wrote:
> > Hoi,
> > The quality of the translations will vary. There are many reasons for it
> and
> > one of the things that will make a difference is the number of people
> using
> > the translate tool as a rough first pass. Once this is done, using the
> > translation functionality will help Google to improve the quality of the
> > code.
> >
> > This has been said before, there is no news here. What is relevant
> however
> > is that in order to support the languages that have not been supported so
> > far, there is a need for people actually using this tool to build the
> > translation corpus that gets you this first pass functionality.
> >
> > Translation is not something where a silver bullet will provide an
> "instant
> > on - high quality" experience and it is the languages that are currently
> not
> > supported that have the highest need for tools like this.
> This is interesting. I did not know it's possible to train new
> languages. Is there any available information on the requirements? What
> requirements need to be met, to make Google support them (so they can be
> selected in the drop-down at the translator toolkit)? _How much_ text do
> they need as a basis to finally enable the translation function?
>
> (My personal experience with the collaboratetiveness of Google is a bad
> one. Although Google is a multi-billion dollar company and [in a fair
> world] should actually _pay_ people for things like translating their
> interface in as much languages as possible [.as Google with its 80%
> search engine market share is one of the most important internet access
> vectors and not having a search engine in your language is a big
> accessibility barrier] they rather choose to go the cheap way and let
> volunteers translate it. That not enough, they have the chutzpa to
> _reject_ adding any further languages [.no additions since at least 2007,
> although they still support Elmer Fudd, bork bork bork, Klingon and
> pirate speak...]. At the moment Google supports the languages of
> roundabout 85 to 90% of the world's population and it seems, they don't
> care about the rest.)
>
> Marcus Buck
>
_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Gerard Meijssen wrote:
> Hoi,
> One of the most important things that is needed for adding languages to a
> technology like this is having a sufficiently sized corpus. For general
> availability, the expectation for the quality is quite high. To me this
> seems to be one reason why Google did not add more languages. Another reason
> why many corpora are not big enough is because of the problem of identifying
> a text for the language it is written in. When you consider that a few years
> ago I learned that only a small percentage of Internet content has the
> metadata for the language that is used.. When you then consider that
> something like 75% is actually wrong...
>
> Given that Google actually supports MediaWiki, it may be that they are
> willing to support our language. The problem however is that many of our
> language have illegal and even wrong codes. The consequence is that it is
> not obvious to just support our "language". This issue will not be resolved
> because people are under the impression that the "community" has the final
> word about the names of our languages. This is naive as well as problematic
> because it prevents the ease of the argument for Google to support our
> languages..
> Thanks,
> GerardM
Your old ISO code hobby horse ;-) I guess that if Google wanted to, they
would be able to recognize the languages of our projects. Just like all
our users do.

> One of the most important things that is needed for adding languages to a
> technology like this is having a sufficiently sized corpus.
Yes, that was basically my main question: what is sufficient? How many
pages or MB of text? At least the order of magnitude.

Marcus Buck

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Hoi,
The proper use of language codes is indeed a recurring theme. Calling it a
hobby horse gives the impression that it does not have a real-world
application. It does, and one of the problems with language is that it is
truly hard to recognise languages confidently. Suggesting that Google can
do it because of its size is too easy. I am sure they would have done so
if they could.
Thanks,
GerardM

2009/6/15 Marcus Buck <me@marcusbuck.org>

> Gerard Meijssen wrote:
> > Hoi,
> > One of the most important things that is needed for adding languages to a
> > technology like this is having a sufficiently sized corpus. For general
> > availability, the expectation for the quality is quite high. To me this
> > seems to be one reason why Google did not add more languages. Another
> reason
> > why many corpora are not big enough is because of the problem of
> identifying
> > a text for the language it is written in. When you consider that a few
> years
> > ago I learned that only a small percentage of Internet content has the
> > metadata for the language that is used.. When you then consider that
> > something like 75% is actually wrong...
> >
> > Given that Google actually supports MediaWiki, it may be that they are
> > willing to support our language. The problem however is that many of our
> > language have illegal and even wrong codes. The consequence is that it is
> > not obvious to just support our "language". This issue will not be
> resolved
> > because people are under the impression that the "community" has the
> final
> > word about the names of our languages. This is naive as well as
> problematic
> > because it prevents the ease of the argument for Google to support our
> > languages..
> > Thanks,
> > GerardM
> Your old ISO code hobby horse ;-) I guess, if Google wanted to, they
> would be able recognize the languages of our projects. Just like all our
> users do too.
>
> > One of the most important things that is needed for adding languages to a
> > technology like this is having a sufficiently sized corpus.
> Yes, that was basically my main question: What is sufficiently? How much
> pages or MB of text? At least the order of magnitude.
>
> Marcus Buck
>
_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Gerard Meijssen wrote:
> Hoi,
> The proper use of language codes is indeed a recurring theme. Calling it a
> hobby horse gives the impression that it does not have a real world
> application. It does have a real world application and one of the problems
> with language is that it is truly hard to recognise languages confidently.
> Suggesting that Google can because of its size is too easy. I am sure they
> would have if they could.
> Thanks,
> GerardM
>
Let's assume Google wants to build an Alemannic translation tool. They
are searching for an Alemannic text corpus. Will they fail to find the
Alemannic Wikipedia because 'als' stands for a form of Albanian? I don't
think so.

Don't get me wrong: I am _pro_ the use of correct codes and I would
reject the opinion that projects have the right to decide to stick to a
wrong code. But I also reject switching projects to codes that don't
match the project ('gsw' for example is no proper substitute for 'als'),
and I reject code switches that do harm to the projects (that means the
old code has to remain a redirect to the new code for at least several
years).
And most importantly, I think the question of ISO codes is not related
to Google's operations. If Google wants to use Wikipedia content to
improve their tools, it should be really easy for them to do the code
mapping (e.g. 'no'->'nb').
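
To show how small that mapping really is, here is a sketch (the Wikipedia
codes are real; the idea that Google would use exactly this table is just
my illustration):

    # Map a few Wikipedia subdomain codes to the standard language codes a
    # translation engine would expect. Illustrative, not a complete table.
    WIKI_TO_STANDARD = {
        "no": "nb",               # no.wikipedia.org is written in Bokmål
        "be-x-old": "be-tarask",  # legacy subdomain vs. registered code
        "simple": "en",           # Simple English is still English to an engine
    }

    def standard_code(wiki_code):
        # Fall back to the wiki code itself when no mapping is needed.
        return WIKI_TO_STANDARD.get(wiki_code, wiki_code)

    print(standard_code("no"))  # -> nb
    print(standard_code("hu"))  # -> hu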


So does anybody know how big a corpus must be to be helpful to Google?

Marcus Buck

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
It depends on how much a priori knowledge you have about the languages.
At the moment people tend to fall into two camps: those who want to use
statistical engines and those who want to go for rule-based engines.
According to one person there is some activity to include rules in
statistical engines and vice versa, but it still needs a lot of work.

Identifying a language isn't that difficult in itself; most search
engines are quite good at that. Many engines can even be told to
interpret the text according to a specific language, so the problem is
basically non-existent for us.

Still, because our articles have a lot of text that isn't part of a
single language, and in addition there is also specialized markup, some
kind of parsing should be done before the translation engine starts
processing the text.

After some discussions last winter I am quite sure a rule-based engine
works best for small languages, but that a working solution should use
some kind of self-learning mechanism to refine the translation or at
least identify errors.

Our idea was to use statistics to identify cases where existing rules
failed, and let people define new rules. Failing rules would be detected
by checking which translated sentences got changed afterwards. Actually
it is a bit more difficult than this... ;)
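
A very stripped-down version of that feedback loop could look like this
(my own sketch of the idea, with made-up rule names and sentences; the
real thing is, as said, more involved):

    # Count how often sentences produced by a given rule were changed by
    # human post-editors; rules whose output keeps being rewritten are
    # flagged for review. All rule names and sentences are made up.
    from collections import Counter

    # (rule that produced the sentence, machine output, post-edited version)
    log = [
        ("adj-noun-order", "hund svart løp", "svart hund løp"),
        ("adj-noun-order", "katt grå sov", "grå katt sov"),
        ("simple-copula", "det er en fugl", "det er en fugl"),
    ]

    total, changed = Counter(), Counter()
    for rule, machine, edited in log:
        total[rule] += 1
        if machine.strip() != edited.strip():
            changed[rule] += 1

    for rule in total:
        rate = changed[rule] / total[rule]
        if rate > 0.5:
            print(f"rule '{rule}' was post-edited {rate:.0%} of the time - review it")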

And no, I'm not a linguist...

John

>>> One of the most important things that is needed for adding languages to a
>>> technology like this is having a sufficiently sized corpus.
>> Yes, that was basically my main question: What is sufficiently? How much
>> pages or MB of text? At least the order of magnitude.
>>
>> Marcus Buck

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Actually, Google added... Pirate and Montenegrin.

Mark


On Mon, Jun 15, 2009 at 4:43 AM, Marcus Buck<me@marcusbuck.org> wrote:
> Gerard Meijssen wrote:
>> Hoi,
>> The quality of the translations will vary. There are many reasons for it and
>> one of the things that will make a difference is the number of people using
>> the translate tool as a rough first pass. Once this is done, using the
>> translation functionality will help Google to improve the quality of the
>> code.
>>
>> This has been said before, there is no news here. What is relevant however
>> is that in order to support the languages that have not been supported so
>> far, there is a need for people actually using this tool to build the
>> translation corpus that gets you this first pass functionality.
>>
>> Translation is not something where a silver bullet will provide an "instant
>> on - high quality" experience and it is the languages that are currently not
>> supported that have the highest need for tools like this.
> This is interesting. I did not know it's possible to train new
> languages. Is there any available information on the requirements? What
> requirements need to be met, to make Google support them (so they can be
> selected in the drop-down at the translator toolkit)? _How much_ text do
> they need as a basis to finally enable the translation function?
>
> (My personal experience with the collaboratetiveness of Google is a bad
> one. Although Google is a multi-billion dollar company and [in a fair
> world] should actually _pay_ people for things like translating their
> interface in as much languages as possible [.as Google with its 80%
> search engine market share is one of the most important internet access
> vectors and not having a search engine in your language is a big
> accessibility barrier] they rather choose to go the cheap way and let
> volunteers translate it. That not enough, they have the chutzpa to
> _reject_ adding any further languages [.no additions since at least 2007,
> although they still support Elmer Fudd, bork bork bork, Klingon and
> pirate speak...]. At the moment Google supports the languages of
> roundabout 85 to 90% of the world's population and it seems, they don't
> care about the rest.)
>
> Marcus Buck
>

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
Mark Williamson wrote:
> Actually, Google added... Pirate and Montenegrin.
>
> Mark
I first asked them in 2007 to add my language. They told me no further
languages would be added at the moment and that they would inform me if
that changed. I asked them again in 2008 and 2009. One time they did not
answer at all, and the other time they said nothing had changed.
Pirate of course is an important addition... And Montenegrin surely was
a good measure to endear oneself to the Montenegrin government.

Marcus Buck

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
On Saturday 13 June 2009 18:20:36 picus-viridis wrote:
> IMHO automatic translations into Polish are useless, as they only allow
> rough orientation in the contents of an article. It concerns not only

How is rough orientation in the contents of an article useless?

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
2009/6/21 Nikola Smolenski <smolensk@eunet.yu>:
> On Saturday 13 June 2009 18:20:36 picus-viridis wrote:

>> IMHO automatic translations into Polish are useless, as they only allow
>> rough orientation in the contents of an article. It concerns  not only

> How is rough orientation in the contents of an article useless?


It's not useless, but it's not all that useful. I find when
translating from other Wikipedias to add to the English version of an
article that it's the subtle and important details that get mashed to
uncertainty.


- d.

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Re: Google Translate now assists with human translations of Wikipedia articles [ In reply to ]
It also depends on the language pair. For Chinese to English, I
wouldn't even bother with such a process (having a machine translate
first and then correcting the errors); for Spanish to English I do this
very frequently and it's a great timesaver.

Mark

skype: node.ue



On Sun, Jun 21, 2009 at 7:05 AM, David Gerard<dgerard@gmail.com> wrote:
> 2009/6/21 Nikola Smolenski <smolensk@eunet.yu>:
>> On Saturday 13 June 2009 18:20:36 picus-viridis wrote:
>
>>> IMHO automatic translations into Polish are useless, as they only allow
>>> rough orientation in the contents of an article. It concerns not only
>
>> How is rough orientation in the contents of an article useless?
>
>
> It's not useless, but it's not all that useful. I find when
> translating from other Wikipedias to add to the English version of an
> article that it's the subtle and important details that get mashed to
> uncertainty.
>
>
> - d.
>

_______________________________________________
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l