From: M. Taylor Saotome-Westlake Date: Sun, 10 May 2020 06:00:35 +0000 (-0700) Subject: gearing up for "Sexual Dimorphism in Yudkowsky's Sequences" X-Git-Url: http://232903.hjopswx29.asia/source?a=commitdiff_plain;h=44057a45831c0af4ab0a569fba4ced2bdc3cfa03;p=Ultimately_Untrue_Thought.git gearing up for "Sexual Dimorphism in Yudkowsky's Sequences" --- diff --git a/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md index cdb59f1..491e21c 100644 --- a/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md @@ -24,7 +24,7 @@ I haven't been doing so well for a lot of the last ... um, fifteen-ish months? [ But this blog is not about _not_ attacking my friends. This blog is about the truth. For my own sanity, for my own emotional closure, I need to tell the story as best I can. If it's an _incredibly boring and petty_ story about me getting _unreasonably angry_ about philosophy-of-language minutiæ, well, you've been warned. If the story makes me look bad in the reader's eyes (because you think I'm crazy for getting so unreasonably angry about philosophy-of-language minutiæ), then I shall be happy to look bad for _what I actually am_. (If _telling the truth_ about what I've been obsessively preoccupied with all year makes you dislike me, then you probably _should_ dislike me. If you were to approve of me on the basis of _factually inaccurate beliefs_, then the thing of which you approve, wouldn't be _me_.) -So, I've spent basically my entire adult life in this insular little intellectual subculture that was founded in the late 'aughts on an ideal of _systematically correct reasoning_. 
Starting with the shared canon of knowledge of [cognitive biases](https://www.lesswrong.com/posts/jnZbHi873v9vcpGpZ/what-s-a-bias-again), [reflectivity](https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection), and [Bayesian probability theory](http://yudkowsky.net/rational/technical/) bequeathed to us by our founder, _we_ were going to make serious [collective](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [intellectual progress](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible) in a way that had [never been done before](https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/)—and [not just out of a duty towards some philosophical ideal of Truth](https://www.lesswrong.com/posts/XqvnWFtRD2keJdwjX/the-useful-idea-of-truth), but as a result of understanding _how intelligence works_—[the reduction of "thought"](https://www.lesswrong.com/posts/p7ftQ6acRkgo6hqHb/dreams-of-ai-design) to [_cognitive algorithms_](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms). Intelligent systems that construct predictive models of the world around them—that have "true" "beliefs"—can _use_ those models to compute which actions will best achieve their goals. +So, I've spent basically my entire adult life in this insular little intellectual subculture that was founded in the late 'aughts on an ideal of _systematically correct reasoning_. 
Starting with the shared canon of knowledge of [cognitive biases](https://www.lesswrong.com/posts/jnZbHi873v9vcpGpZ/what-s-a-bias-again), [reflectivity](https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection), and [Bayesian probability theory](http://yudkowsky.net/rational/technical/) bequeathed to us by our founder, _we_ were going to make serious [collective](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [intellectual progress](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible) in a way that had [never been done before](https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/)—and [not just out of a duty towards some philosophical ideal of Truth](https://www.lesswrong.com/posts/XqvnWFtRD2keJdwjX/the-useful-idea-of-truth), but as a result of understanding _how intelligence works_. Oh, and there was also [this part about](https://intelligence.org/files/AIPosNegFactor.pdf) how [the entire future of humanity and the universe depended on](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) our figuring out how to reflect human values in a recursively self-improving artificial superintelligence. That part's complicated. 
diff --git a/content/drafts/sexual-dimorphism-yudkowskys-sequences-and-me.md b/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md similarity index 58% rename from content/drafts/sexual-dimorphism-yudkowskys-sequences-and-me.md rename to content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md index 39b14d5..7ba3f1b 100644 --- a/content/drafts/sexual-dimorphism-yudkowskys-sequences-and-me.md +++ b/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md @@ -1,20 +1,29 @@ Title: Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems Date: 2021-01-01 Category: other -Tags: autogynephilia, epistemic horror, my robot cult, personal, sex differences +Tags: autogynephilia, Eliezer Yudkowsky, epistemic horror, my robot cult, personal, sex differences Status: draft -[TODO: robot cult backstory] +So, as I sometimes allude to, I've spent basically my entire adult life in this insular intellectual subculture that was founded in the late 'aughts to promulgate an ideal of _systematically correct reasoning_—general methods of thought that result in true beliefs and successful plans—and, incidentally, to use these methods of systematically correct reasoning to prevent superintelligent machines from [destroying all value in the universe](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile). 
Lately I've been calling it my "robot cult" (a phrase [due to Dale Carrico](https://amormundi.blogspot.com/2011/08/ten-reasons-to-take-seriously.html))—the pejorative is partially ironically affectionate, and partially an expression of bitter contempt acquired from that time almost everyone I used to trust insisted on selectively playing dumb about our own philosophy of language in a way that [was optimized for](TODO: linky "Algorithmic Intent") tricking me into cutting my dick off (independently of the empirical facts that determine whether or not cutting my dick off is actually a good idea). + +But that's a _long story_—for another time, perhaps. For now, I want to explain how my robot cult's foundational texts had an enormous influence on my self-concept in relation to sex and gender. + +It all started in summer 2007 when I came across _Overcoming Bias_, a blog on the theme of how to achieve more accurate beliefs. (I don't remember exactly how I was referred, but I think it was likely to have been [a link from Megan McArdle](https://web.archive.org/web/20071129181942/http://www.janegalt.net/archives/009783.html), then writing as "Jane Galt" at _Asymmetrical Information_.) + +[Although](http://www.overcomingbias.com/author/hal-finney) [technically](http://www.overcomingbias.com/author/james-miller) [a](http://www.overcomingbias.com/author/david-j-balan) [group](http://www.overcomingbias.com/author/andrew) [blog](http://www.overcomingbias.com/author/anders-sandberg), the vast majority of posts on _Overcoming Bias_ were by Robin Hanson or Eliezer Yudkowsky. I was previously acquainted in passing with Yudkowsky's [writing about future superintelligence](http://yudkowsky.net/obsolete/tmol-faq.html). (I had [mentioned him in my Diary once](/ancillary/diary/42/), albeit without spelling his name correctly.) 
Yudkowsky was now using _Overcoming Bias_ and the medium of blogging [to generate material for a future book about rationality](https://www.lesswrong.com/posts/vHPrTLnhrgAHA96ko/why-i-m-blooking). Hanson's posts I could take or leave, but Yudkowsky's sequences of posts about rationality (coming out almost-daily through early 2009, eventually totaling hundreds of thousands of words) were _life-changingly great_, drawing on fields from [cognitive psychology](https://www.lesswrong.com/s/5g5TkQTe9rmPS5vvM) to [evolutionary biology](https://www.lesswrong.com/s/MH2b8NfWv22dBtrs8) to explain the [mathematical](https://www.readthesequences.com/An-Intuitive-Explanation-Of-Bayess-Theorem) [principles](https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation) [governing](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) _how intelligence works_—[the reduction of "thought"](https://www.lesswrong.com/posts/p7ftQ6acRkgo6hqHb/dreams-of-ai-design) to [_cognitive algorithms_](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms). Intelligent systems that use [evidence](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) to construct [predictive](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) models of the world around them—that have "true" "beliefs"—can _use_ those models to compute which actions will best achieve their goals. I would later frequently [joke](https://en.wiktionary.org/wiki/ha_ha_only_serious) that Yudkowsky rewrote my personality over the internet. 
Ever since I was fourteen years old— (and I _really_ didn't expect to be blogging about this eighteen years later) -(I _still_ don't want to be blogging about this, but it actually turns out to be relevant to the story about trying to correct a philosophy-of-language mistake) +(I _still_ don't want to be blogging about this, but unfortunately, it actually turns out to be central to the intellectual–political project I've been single-mindedly focused on for the past three and a half years because [somebody has to and no one else will](https://unsongbook.com/chapter-6-till-we-have-built-jerusalem/)) -—my _favorite_—and basically only—masturbation fantasy has always been some variation on me getting magically transformed into a woman. I ... need to write more about the phenomenology of this, some time. I don't think the details are that important here? Maybe read the ["Man, I Feel Like a Woman" TV Tropes page](https://tvtropes.org/pmwiki/pmwiki.php/Main/ManIFeelLikeAWoman) and consider that the page wouldn't have so many entries if some male writers didn't have a reason to be _extremely interested_ in _that particular fantasy scenario_. -So, there was that erotic thing, which I was pretty ashamed of at the time, and _of course_ knew that I must never tell a single soul about. (It would have been about three years since the fantasy started that I even worked up the bravery to tell my Diary about it, in the addendum to entry number 53 on 8 March 2005.) + +—my _favorite_—and basically only—masturbation fantasy has always been some variation on me getting magically transformed into a woman. I ... need to write more about the phenomenology of this. I don't think the details are that important here? 
Maybe read the ["Man, I Feel Like a Woman" TV Tropes page](https://tvtropes.org/pmwiki/pmwiki.php/Main/ManIFeelLikeAWoman) and consider that the page wouldn't have so many entries if some male writers didn't have a reason to be _extremely interested_ in _that particular fantasy scenario_. + +So, there was that erotic thing, which I was pretty ashamed of at the time, and _of course_ knew that I must never tell a single soul about. (It would have been about three years since the fantasy started that I even worked up the bravery to [tell my Diary about it](/ancillary/diary/53/#first-agp-confession).) But within a couple years, I also developed this beautiful pure sacred self-identity thing, where I was also having a lot of non-sexual thoughts about being a girl. Just—little day-to-day thoughts. Like when I would write in my pocket notebook as my female analogue. Or when I would practice swirling the descenders on all the lowercase letters that had descenders [(_g_, _j_, _p_, _y_, _z_)](/images/handwritten_phrase_jazzy_puppy.jpg) because I thought it made my handwriting look more feminine. [TODO: another anecdote, clarify notebook] @@ -26,6 +35,7 @@ The beautiful pure sacred self-identity thing doesn't _feel_ explicitly erotic. [section: Overcoming Bias rewrites my personality over the internet; gradually getting over sex differences denialism] + + The short story ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2) portrays an almost-aligned superintelligence constructing a happiness-maximizing utopia for humans—except that because [evolution didn't design women and men to be optimal partners for each other](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), and the AI is prohibited from editing people's minds, the happiness-maximizing solution ends up splitting up the human species by sex and giving women and men their own _separate_ utopias, complete with artificially-synthesized romantic partners. 
At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at the idea in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back eleven years later (my deconversion from my teenage religion being pretty thorough at this point, I think), the _argument makes sense_ (though you need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, then by the same logic, individual men aren't optimal same-sex friends for each other, either). @@ -52,4 +62,4 @@ In the comments, [I wrote](https://www.greaterwrong.com/posts/QZs4vkC7cbyjL9XA9/ (To which I now realize the correct answer is: Yes, it's fucking cheating! The map is not the territory! You can't change the current _referent_ of "personal identity" with the semantic mind game of declaring that "personal identity" now refers to something else! How dumb do you think we are?! But more on this later.) 
-changing emotions/accent fantasies: https://www.greaterwrong.com/posts/wAW4ENCSEHwYbrwtn/other-people-s-procedural-knowledge-gaps/comment/pheakgvLbFndXccXC \ No newline at end of file +changing emotions/accent fantasies: https://www.greaterwrong.com/posts/wAW4ENCSEHwYbrwtn/other-people-s-procedural-knowledge-gaps/comment/pheakgvLbFndXccXC diff --git a/notes/sexual-dimorphism-in-the-sequences-notes.md b/notes/sexual-dimorphism-in-the-sequences-notes.md new file mode 100644 index 0000000..139f957 --- /dev/null +++ b/notes/sexual-dimorphism-in-the-sequences-notes.md @@ -0,0 +1,24 @@ +Incidents to Include— + +Done(ish)— +✓ "Failed Utopia 4-2" +✓ "Changing Emotions" + +Easy— +* love of a man for a woman, and vice versa as separate fragments of value +* Psychological Unity of Humankind is only up to sex +* Superhappies empathic inference for not wanting to believe girls were different + +Harder— +* "I often wish some men/women would appreciate" +* "The Opposite Sex" (memoryholed, but there should be an archive) +* EY was right about "men need to think about themselves _as men_" (find cite) +* wipe culturally defined values +* finding things in the refrigerator + + +My ideological commitment to psychological-sex-differences denialism made me uncomfortable when the topic of sex differences happened to come up on the blog—which wasn't particularly often, but in such a vast, sprawling body of work as the Sequences, it occasionally turned out to be relevant in a discussion of evolution or human values. + +For example, + + "the love of a man for a woman, and the love of a woman for a man, have not been cognitively derived from each other or from any other value."